[Tar archive: var/home/core/zuul-output/ — contains logs/kubelet.log.gz, a gzip-compressed kubelet log; binary contents not representable as text.]
ufVa+g.oVuW Y̷juZ'm@-d:/d@X8pie-pdnhAR0@p1;`G5:".aȃchgU"D&jZfT^eEť `ZƲEm@$6X/d r`Qx(nEe06 ی2YЩ֏D>ONXEiQXv 5Sb貒A";>l htkϦ}⋼kh".&jH-Ⱦ6 ^{@#B zv)m^AU@@P |Y]EpF ]J!vo1ᡔJr A# Da:ࠃ L 1A\;d,73ch"h54>p A B\1vƢ(Ψ 2KD(&Ў~ЃA2AC("6ZehX(FaEYE DEU@r"L4ؚ5'7`bl#z,a4@ЈP9sMeQ{CEXƂZzFUP,KЅkȑlUq֥d[i:8lE?1;fTs^ 7<2ds[khK&%\ E#)RCA@VGTi SYoIǿ৙Ŗa3fw}NűDjHJj`FQ2/S)FaV1Ue:@)W҃0VCvdXc46ICT >ᏞOW*ϕ+ȓ n]DNAv& \ة %1<)U,JCɈOZCoXR-2p?Qڰ(RV=+NF1@sJm,2AύLmWjRCtU gFPqQI<(bҧd:$@Z`;k$q N}`jo:k Be_j`wpWd87=5!bd , %8ÓT­1?..do7ji!$ĪXU?]< |٥`tB&JMpd$b CX7.LQZ Y~5 ߮HHTvBD8Ø@vQꅅߍ7dz-y5*]*u \dwW!˟}ȵNBH,5Uh)l!W;ٸq:ncSlm֦36 ¨^ D͞{ϳV'Jňz$|TD D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D @rBH g UŐ@`;}DBH2hO ""H ""H ""H ""H ""H ""H ""H ""H ""H ""H ""H "^/ $ "`_ o I О@@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D^HqK"`^ dH %w9UHHi ,@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@Dzȭ5ڊ_+ <;.mݺ~74:/gwR2 Xz "y<g A2%=;m}p=Pчvq[_wxkzŶl:褐Cr<op.[8ؒz&4ҺÃ_ qqwXElX>X$6ȩqsVsal B}qm~\y6ɼ8:&zd؍!+ZH`8CSUd[.]IsUGJ+lxla-a75@GOt2jrSѷG?[LA<|6K4vd#ǫεst6z ^nno^-ԡy89MԏavOӋQWMk[oF4m:N@ uU@!il3TQmlmgnnh9ۛO7gvpfÛu{?i/^Nڙa2hoyt{~|~<^\4xk)]^5&\sySw7~~nn{ڦaxk}Z*^--竟uofjwibLS%wI^K~=H#oM3̍rg͍%r ,ߪ]{t⅜hcVȉVZCS4Đ.z=;?Ӯ#IFV+溳)N:_X7\fh #gOnѝ ;FeqsIQ$.༔")\*>E Ib|3[B)M<ڿ[hEɧpU뤌w<~KFˬ˥/B -H[ 6-kQFuȢ4#M5._OҰMe4{ߜ}R ryl)m9f?}ΖxTXԁI \JsM`p h0]C|:Ś #c3sGvZY,lB8rO |]2l6i~eӏ4n:\b8}ۉRWs&YJъV:IdeT9̜riĖNVȤ() $ bg[!cW|QT\(Y8b73gĎQ^K1O%j7cl%EmRViK\0RŦD #b3sG:B\. -OiM㸨).R\Ad?%O^ٌ*(Ed1$X@ Q\|x*wJ<4'a iUGT헬n^G+,$߶v ߩ .RRgm}oٶ;iO0'CధRw-kQ[=3}Om)}="љģVXvr2ATX5ʑN<(.:Gi#|}bɆ9:A =U4~q?/ oyScaZk_[/G;FXf"ڶ#df~ u3Ŏ/ǝv8*}ytd iXU\+V,Rt1ֈ}UTG.в |-goU0\"I`4ϼ T#nlϯ.MZͤV]PDyҪ6FYɔή}r3[0HD4䥅G1-Sym'zm1kU{AhߥGRWM6h 2ĵIC2sD݌v=) v:R+9Ɲo =x NDd{J+ G,VXluuѮ{I<'A8pNSVyJwWBP Tl<&ϭGO&a6 ~iUi~bIiq#wU;pwFE,v@cHZKG*9U+k UYigc^&Et]I&ptb.Ce쓴\k8my͉ٜa@:J[H:/WaLA_KȸeIn$ڬRx~F+[hHԑ4ީ落+ٟ+]\ľ'NOS>{^G]]f_55+TRDZrU.mQfctI tUJb.lTT.A垙A*uq3s\ouC&>)*q1[H SǫݓWP[+)A@FUh =:043)#E3ؾ>қ)(t~@=6>v!nn5r=n4+<,A]je(ŕa줯^@oƬ.5"Mqxdv)l =yScUA0Ċ15j3eCZLCgCشDIAbADBZ>Kq˭7^jı/ŇLC}i9栝QN))\6*L!Re1%G:$5ԛti+tq~UXE `(֨_,XRQo7F3o""" <\NrCN톿ӫb7Źg|KƉUR/J%翜8_]aP#7pRnldz:_ί4Z㵑vp M`\]49pTǓ2o(nSԀ|D[b[.g{FW! ~?  vfov<60ik#K(9o5IlS%J cWUb |PZgwU\?/ -#USn4^w6 IͶEa\AQmQ G޹z:eQm-TW3;l޳3i[1-2E&*Ҿ] ,_liE`$'" t^uMkOPUSy11`|3D97ӠK Q%08=a__MK[N-lgaćFGeRAHV_b a,0rLR@^+J/q^ek7-&Z.&?o".{]+1ҡNS/tʘiMsJL]F9e g9-r͙A6ƒo;-\YN)W>YTX J[o3 3©V A` H( ¹.kͻ$[I8OxR>qUvGSM=4Bd|Ymx~gPͅ^JBzA3M&Pk`pP+{@ịUh44 6`\ hN`)A8ASPy[#q ~g}r<DZ)ޫoɽbzMQ+β|)Rpgez󼂘ɼyKpÑO3P^lpPf!Gpcv;f96 gZ3bED4^y-L%LG8Ό;#Ԣt*. 7+#ε xPLTLGCh7)h)n[EĖPzf!% 3Bs43[#)J_]OBӾ-^B,V%~C>kfZ ë/W9eDW 5\ϫMa!2e9H(H,60v{ SD [f%Y wjP)Ma!@QQ$E`Fǭ7LL<#Rȅ>S(jh#$)WʆhR[EhX)ı(aOpۧZ"q"evcE8`WsK砧xMF.4{oO?~p~%b}O c`URZ@DK̐@)CP,Ì[icpc% \JlB OYc*5 %Jy>^jF,+!H')#)y۽WRE"f.]lv&^x|{6RZ{mF:h1y 8%i^ xj)CE|g|OG#6Lrj ,cEjc*x R2"sLYq!(!`XmPﺯt;"E Rr@)"Q* `pL^A$ ҂m=W+(\A^\X&r B?Ņ`gcE'r"(´2x=k%:߿](`d)Sg I*er}0AٔGP)a"]AZHkksp2C63g; JY>8^W"RDt\q6\B $c]ָ$ 5@?'LwLK~GP!(b;drɑc  ^ )(W{XA ^{@0! H ˥CRPy0'1f3 rccͮMHQ=UDRN0 tr49}C$OX+.a 9*9R꧳+_;o&"8O 3bA..z> "`Oq} fPSK\4-]55CYû,2a`(|8NٻWuNV Z괓M655;"SI s`?G.zRY;‡8V6/U/ ۙ]_;ߝ}.<{s~}9&o`8ި-7&Mb1m5 M=i)u=h > >x.>As J?d?M~3o{EJ2^nV=Y i |_A̯^%UKϨrWT! 0z@/._=<>fKVHhybaFivʗ&e8 ѓ(y#2s;hƆ*fX !:A%s(~pa !;52I7Lכ*·n,_ӟ*E!ߨ;R;} Qe-]9ZQ!rFBB,:2H-hT+]\: mu㞵b$8P9Xt9n)(,[');Tv8YuKwɼxcDx;xMy ! 
)ɤAI 9*#u `ս)l#2h ǣE{Pbއmj*Ԉ<3 KfLcz=En0.a_`x sZD)3ln%9ӊ՝UUveը][5jgV |<*D i碊3$hLincWŕU  RId&|,iL"-HFl8#3sj*Xde-0bP{Đ}i!=rWZ[=l"6-}/Hҟ5 ZOK$8+Q5=t"byrvq>}zzX*FniDEDY :|k-gd-ga-E9VY |Xy1N\ Uo#c"2D&=XL`4j5\!XX)V'+Ym!sUԙ!vΞWQ0sUOBכJ;m%j?w_OVll$q8~oDCC&Z2(&$+bQ'@NhB٤#/d,KBNƢcd-%oYUM{ znZf/ml-kuXYLny=<ܞp{nc D;ܞp{nc =Z&Tyf4ZuÛh%3wMWvhF)8@p{ny=<ܞp{ny=15 1p{ny=<ܞp{ny=<ܞp{n݅R-niYK,wVM ]t\-T4 NEq&&01rdQeH(>ocvw}NTGz^dpQLHr1]vPmF $+[Vg~p%/O~TWm/o4lq?bM.|ZSOw~0ڲm\տ[jӝ]Ԏ~p.W]ek۔cT!;狣eMFI ='|ԉrI& UĨ;/%+f!+2Pa Xfd!4lHU\U+*#ӯk< =A99bGbjl/Kϯ ]Хnb%bkLDip}6OW 23@:aaJWΘvc?yeZ(Kutyefz)-VG~_2la5b7xGmJSL`}l5M_}b5oU[!F}|kἣ%W;g|_c+Wh_OvopwmByS߁xz-87j;IoHW'&(,Ch;Pe(Ґz~4;0#0lBrIGV9y)s5jLU'x +.G|-֢|hCj .){v|bKW iA~sݪ{˥arU/jz|7#a).ʣ jODTF*j`J@n* )i6Jpur.!VĔ1r1Ύyl9;C҃19s|cɱ7L }gHVm0xRZ:rskLUB e}J:K } <κUgq6NPK${幤A+f4leVA;P%SJqb GbYC(FifC^ Pӯ)2\gbY!xG]K Z3EV1%=b ͘AQt}:Ic?RF6%T/\̔Pi ZTuRaB2IDAf]cVZ ;RN6^ZcF:JQP};ѳ&i cEges&mvDnMxKC/2uy݊ r@mC (OQІm&UL"áb WcxdlJ2:<:Whkӣ eZ2DdF4+Ec **[XUJu2oŪRjnUj 5*JF8tZøD49w|];ԉqPnS]jF s$6vO/zVܳ_`oM@ޡ",1䢱$ZK XT 9+Ѣ#s[ f$EEiO)e0ۜ U\RSR+n# INXmOke'mڶ<\uz ^:*;GSumȎmbus8C`(syo f'ŵ8H0]_] IS5%mgu=\{x%⺜M;We6x3(f޿e[c:,vJ>ZEӍ]N܈ERGMqr$w/=~xy޳;]ѷ,P馷_c/%yw7q{qW _`կ(8s98_Am _ߺoŮCA~A6ě_gJ}s6毴6d!(AA%q*/j#>oC ރ69P }M=\S jǠm;6U`2iɍv_JA(tn}[o7j@ZЍ,O;?|T?G,NՊHM!e+LT%pT Y̭pFgq88ɻ1M2*Z#YK5&yRNd l)gq[ {NGtgt|#lϑHt%ѩZj_gcPs:Ҍ1ME3(Q{Rqt)6IQI0;WuP:Z^|.WMխm8D?a @z|L6=NL:9V.Q𪪜d;bp3! Vpm`jÙcB(xlA*ed K錈ݪi(X$i[/y*.θ.\|aR:4NH:)YgBn,(El(2ƜcVYx\ vs'Zf:mcwXHkz&DsȒ9`j 7V#Ĺմ:VjZm[Mapuj@i=~ .9@`:WH&&MҁyDm _5^+Oˊ#-=H0{ÜkTK5KP&)U e`!ud+eDZL` Z16WmSmјo&J\uޕV="I&?/'9 駸e憅}\HVGwrS_/Om8FC2ܳRcK&vA*vLwptsѪr\@ B7xW0dIMۓKl8A։MLőAQJE[.c_tZrP:7G@Cr8xF]!jEjohN30t1ǁ#v֤_ǭK[bܱ[r{zB@<5&XƣPMk:k$bZ_) * b >!#R[~{3^9'v"]K}V2DM`N33HeKcw}{)vq|a\^RzG!C*)dxq"fF^F?F [8MΦv kdZ,paݿxF~Lu>0x7y_U-Hq؏&sh -bٺ uz$~ [$F,#SƞFF;7ԬvZ|d+%RZ*2~5kL̸Z&u2d助*vuQ,ɉ6뱮ף wo<|{N'1`|3D97ӠKaZFK}dfE+NE۰//ڦ#ͿHXk!`+Qritj5K #FIʕY" t3Ŷ d6>wtv­U15C)ZܼuQ,"a4y/PÛfq-  ^עwҕೃ4_6'7'd 7xۤhtmY)xJs&P7ozb~ʻo^t{Xضv6=v) o4_x(ˁi8 Bu4Λ;mw a.47M bs(O[,Q1k `MJGƃ:կnPVa坤mFa <)ưj2}<ջ|Nupdg7bL>aEƛ=f%"17˷×,_zЧ(is2ǣx`5);"EvӸ1Y:Nj& =Ycޯ a1pFiz+eSy/knvI,۟m l>m3__pkhi:ےq̚HTk3\ckp1dT ט3\c 15f 3\ckp1K ט3\ckp15f ט3\ckp15f ט5\u;(84H9 n#-@u)yi3pe\W#sJ5ʞ_+A98-k;bI]|zA{ٝ+Ow;MA_BD),Ԭv&W~\Vi"n͈oɸq,}{R3};Nnzv7 ՌNäiQv^V>B/..f_QC\6꟏0in-T{=r7]6BhaELksre8? ]|4\xgӯ#5Jvw_ӟ~oD>*bk ʡys_wQV/vh5˖]=mү1L0Uv gq޿2ʼTO?er8BO7'Ԃ"d.ˍdVnku&k9L%āߨ#|fڟ4p&&g4;Due\/׃Uܼ͟AR1bl~UZ~3oFh7fߌ~3oFh7fߌ~3HŦ  ~f I_,q˰ÿft_¢7g1 72b\̘s1c.fŌ13b\̘s1ar-Q߭·̾9XyvÂK߂>}}+Cnf6mF=kx`\1)tc 严j؂v.uIB I/(gUL%ckia5=-jUf=sn7/|Ժۘ?]t+I'v2Y녑و),Р$ՆB̼0o} ;quoJ#kәX_08^jܪŖmoɾ hAbVϸc0.aa?khd;XI ,٪ĪQOeըjԓY5s#`5%EMLN;Um!Ae2@1OkӪbD9DŽRnh_ȹQjEwaREbhl[1jRh4$KwvH8+A#wO5`ܾ fTi_97 vXuy7|8Y1=qb̍0bG:tU>} vrَF쑢h֢} E*ֆRT+Q pD}M( ¹ks™GI/Mw6lO19w^ri@'-+;1YcU?ӧs~H#D<`C6r-5`ERv؝68J>+7+L샏ӭLڮ]x 6^喗B}?A nsAtXUʆnj֯nQO>(cnKٟMD5o5|g2 ƕB/oJTOACb36C+`5 +~<*jv>m0Rap$%;՚mpш(cB.%$f#DKv(<&Z>眳.ҵ"P6 fWiߥY#mSYU42Xb)WEx`OD(o"z&[h\j˄yl{xZ=b R H#%b`ZGj42 DPm%E3mKY{^O/(bN/pӋ9Iwrc_ؒO"VK0u0;`L!E f_k=oѵJM:Ε6 ў>* kt0ҹ@#׈i# }PMPSK,2cvؙm dZ"La}3zV(Rms^@{R2"sLڸlqb,| TxGDJNUH$Әfi%p0AP8@2AZZc0ά,9B ?ͅ`gR N^yE4Qie{-J,ܠ+w*`d)Sg ԓt{\YI%X` H)a"͌.#Ơ9ƀpb"ɭY?C;S~~gqﱽVΙ.1Xs)P:yDNäYFarf{ vf\^t4oJS~-)eCP6(b5cl+0s `;RLݞù_' B_wwͿJ`! 
H ˥CRNe*0+8NJ| :_bͼŅ)I=J0ͧŬjDcg=R)hfALS+` k9B ŕ>եxr3]z|b:9.*g7^rF̹7Y5TvQKmtR"lpapZK[bΗ5C7כY\,G!%+ K#F1AyorW4Y*AwtrY38] 9,#A~"y.=krT uGĿzuzgOG'_~>zq7>yurF`l {HrcmB}o=:PyN" HHk4%KBa5!g,nOo]C uH7y4ykB-y6\;>Rngch|9Zi킬ոчV'ҮUׯIPCY~>YֈY7׎wS{*grO~y`#Q=MDT/pJ7uP}!<Ti(hxHP$Aq{J++cJ++",D!Dʵl ))EzS*( c&n+2}k}Hh{=]_#`V2|3))#QL)fIaq6o&].7KFh+ K9&2 [|¤{'g|x> g h7&F^C$%<\QЖ Gt=GxEI2eC1,C&RfnK!Bh=xKF.ĹgqҝS:xϾr}4ol'r,'^3 ;S=[CU+ֺ 1=_}$꤄C>|iƞ8;逽RɘD VnY ^&E={o|+{Ń`PkRA1.gTHZSa*B HݲYJ)!EI,DH]%eL.(1{L C"n{GnIrt9:FĿtpaEF< A|O *-πB16WQ 'c#O#m<Ǝ'jg?QI[Kii4lۼ3=ۿݽw(7bc DZVzc:yV& +I7o9?vϝ]>.TrX MwdxiQ%ڢc2 QբDȋs̘PDD赐x)2Zt%N1-COu̥BA%RV'm'3q6W ;ӌmPu½bRڶ?]3iWg'4~8_~3k2)'gZw41GL`"`tVRmKӨ p7Tc'4)(*lj1)Jm'@\DtصkNW܏ag{)=V)_0Z`TO:ZbIt#eQDc .(e4%b E"`63tb pTtXOVFI % (M6Yg /hN*JcgƊ52QVirm툰lR}hx'F+v'߲bCPjƐt֌=9p }`!b d :|yWБ)Ɣ~zt7=d젢;Ӑmgv<;K}? k8|JӞ,|r@HT?CBlX&-]:Kx꾪jz8N/$R+ Nڟ/Nf%'/ w?xqIHw_=`n!MZ0vQ i7) ͒է BvUzhKGņ9]trU{_ f7?[<`1P_s~20Y隹}\>ϟ Xye]z>`\ ,'z`@.ޢ_4C[bcҁvdҐGli&Ԙ 4ɪ3Roed+_m-> V+AA:jRKYbK& YKUQ)j'1Pnk}e,E@_1%FSX&2Aʳ]gJ 8Cb_B[Cxdm+Q3` t$[JMco=1g{cG̿q/ m3I1u'"4a$3)*H 9MDTLjI(<ӥVsMMˏM`-@}dk\UBm+8(q~6Ǎ^(r 6  퐢(h%"dI@]KK5?:y*}:U=v v}`J`g[ܭ3E~3[zh=H{(w?f~/.byuਈFkd4Jp=~,VEZ^uvrծ706N.=MJҞKM~)F0 m؎KM5B@2%MS{&;&8oi;]JR5 oQ%Y1.+).FBku]퐊Lݥ {V8_f|%2NѦdtݜJu[>c1X*[g7|vzv3;swtywf{C>3I7]ɝNs/ r0WmD7smzo~ύƣfƹg#8b/N4.yH?JnC1m-6 i9*u41$U\%J cH/0&mxOJk gj7ʫtemչD\yvn³|/N5 œ}*?7oLSg~i$!z3H*F^]}yvWl6T>Q ߗT|?JXAfm<`:ʀ?_k\W'ֳ{r:=%8\?V }gi6VcPo@beaU3D3elJ @R>f ]tF ֚}wkqT[6U`kfmѬUZ:ȵYe]wd.P-jN36ۿhmv~{<+8_N>q>{fooS>2b:g_z.n۶XCDaeMinsls73k`82o:Ȫx1HQlF6Ehr?gU%I%>T|z9`rd2BLdV H1,^yVg Ĭ6V@Rz9Yٻ6r$Ww*0ٛff n,ю-9WlX-Y۶iI5E٬SE򩐴D6< ȈqM(="b€EuLԙ8wS]B/K;;Lyfyct"'})Rܾ*WT|a,g < = r+tS`PVfy`4> x*)@gy,rIٓK>=<rTcR)!; bvRVh*Sz 1S3Q`l1hr(k\;F*l{H1;F'$ =?_Y:|2fޝp%pm%jSB/O+;5k9T2쮴d2Xoc/0$3* TDkRd`2ۖK2L K-Kr:R$Bܙ8wk8q,[(}S6ݧsn+zd>ޜX|n- }81']O[p#M/ eRWv=B}nRޜq/ ~E<^*yQ[Ĭϛ燒Uw1lnz1{GqBO r>A8Υdq134ix S71Yv,}CͲ#s&;bB"EABNqa)|'GB7zv`GiSo>u-ᗏORͳ-Pu\%3UZjȮ5*fQ>b C(2 ~QQ0@^a=,w检{ʲxw9+Z}0ђ$DA-e#uI6R0 X'u|ML Ih|9>S^DRnQXx8@XLJv}M+qƝbZ)ǧtN-u8qN\c ڈWMqC~7Ao6"۫VUs4&)MR Q?,9XTm!R"\L,2, 6O&dgIkee=0*^LǺҙ8{^ AzM9MIg",>il=1o-YՒ]+j8v]xW4&76CfiB!5E[*&$nQETPOcB x)K;谤^, &^4JI)cR:5^^9#utlS#е?~V g *kŷa 쿝W,[ɮQгkT)kkXmu{qG.9*hM07Z@Ч{Խ nӤ='ؙgm" |>H)b0" [e%i51 !#qЫy~H:`w}ᢚfgk٤~4R(ڇa$`1ׇ3:G1!Y}pʠ^_%.+Kb($(SlC#bxJPz"FDNl0ƪrɓ̖d_eYj s}L|w3|҆ rC<7 z]<-5g5e$3PcߋBp5i: i>G|ųDgVI>ʐ1d]`]\kYK< =>:yWgQ:iX8=_CkH#ZKWg5lU> _^e7f!$=/!y JsFuWۆ6s0 VœQ9Wc(6bֲ~F=ͣZG$om[:֌̚^of}aE<6Gb]ǣ@O g:V겓Z앒L}ԑڰMy1eUuXx~ak+u[*u<'1l8?{oǟyw~zN*|/߽:T鍶"!{h^wӨlWMs{˦5=׋]\]-#ԧuhi o_t~Rsuj՜u:h@˯جgtURT;*QkB1pw حl16QG'Q*(erS"B+P{RT2R7R2 7.SG6i!F{oͺ!BX@vh "*F}yT4ցh` S;lLw+m:wA͸c;5}ϒ>d{e¼&\>ϑK?Np2!*.8c4i6)Ʀ5i 6{eBЏwp>0HǟPgCЈR&a" R@ҔL*(QQ{R?(jp]hOGn$rٹ 3@*˓W]͢Z45o{uI=zQi\6'[|.գee˴Auo t/=z&etgxO3/]vua#:5Q&h瓠>JMϦrT2h6.Tb)B(JdmN"ʰJG􀨋Ĥ&w 9- F>gFH)pҘ,UDԅd(1x9uٙ8{nm!1]&GM.Xܖ'aV:痻Mx|k[!o<ߣ՝-dܗY>kwiqQb޿3O>g{ SiowlPk?T n_K7һMtUKø{4 d%lݿ]{wW` ٣浒d=ci/;ړaZٱ`{Qlx:7 aDqy57}3xqh<ۂ ;~VzM,u۳zu.9oX$-т>N{狒 Qh|jJg莯9hn -iS>D7 %JA67&,rI$)Mzpuk>(߁l)>>!ol:ZQ2!AR.2`T"W 0B/6 N6ұ-w+oJ2eYI@itR:VĹ[QIm&!\l26/RȽ1twVy/m6~U%YBdyq&M(" 1:PMON57,PQIXJ[$+}'6R][#?XgX3[̸-TBG;oip͛4N^ OçQ`cLZJ`g\L8EZD Hn/k0Tg/bVljαaI '( Ew%"`3qq<5PvgV[wluo{HOz(i<tI*PYkՌ|r)ZtҝÚ97VJ^tPɩ092.$;Bv&u+dXƏb;chz[ĝ2X t.Xc@ZIX8&NA:> 90V*N-G)þ1c-}q 5QڙH1.; :#CkmHЧ@c!$n_r %)H(R,A5%JؖřLMwUSLh:bFR#f0P;Y6uӒu*e;xg"JREf1 VQP.zu:0.A@!_܄_XL;IĶVpkēx RK2'sX1" /B%45|Tl%"1+PR] ͫVBSmI[St0S'EN;pqaO4oQF;nR$e_W&У}+9 '9Lj2i4L(,v%CsD'ďmrڲ W {p?RG*WΓq"'$3`SU #2#up<11W]2ᯝƖgTFxb*y V3!9Ƃ4 89Fs $׌3e EEԈ K"WڌZ[•,I#I&g*"'`ya;+&-l Ir?kOoK =5*p ÒR$(A:jY͂¡"$,:(igE[G} ?o5g#`"J@S xB`B+AO+jZ~:.NTe[959;)#]Jt42Ed65ppy3*"8ctتMtoZ &YPRPQ"P 
J+SQT*XMޙH]p6Ծr+"([gl{(RuO6y!Û̓&Іr!DO!Z< f4QAmN2I:(gs0UiȺa [AO'yb0XMɜYDDC%$t&lbi>м'_Cxm9_ةiKr;o/0yw+пh|~?ǣLi~ڇ7?KZMfQOJP3N-ӟ", 碇L7[g}Kdxl2J%sj箲жrB͜t[vg`3sCXmY=4.n}%DVAF!ReJ"+PgȈ`u /zV:7Wܽ< j3mQ=C&z鉁)\sMR(ˮ^,͑eE/V,]É>c0-P~d@~|ˑ2<Ɔq|ml\ `9I-lPAkm̳x4N06hd" $%8@LBUV?0$&-f()Tm HʋݹPkmQcp̧f+nw㕲e9ЯrB&ƅm`*Qf"<8$b\vܭ|+ٶv|<ߟ/}ݕy*'klM%TΕAH!XYBs&Q9S4OJ1d ˹7i ĵ=`KGhBlxDMn?@{?7W}ːnB?-k ~i%q0}x^WH)_Yʂ\k-ku[=K>t$'_8҄t)j^D9sbh DpxSjjc#s簴[>Gz>A}<]vޟt;CL|s̢B u+5QD@u("8i!)IਐhH¦#l#d]޶c%ύ;gDw)/$* VkBFc/:=8Ȟpuj鯮lHb; _վ[\vrDxjoV'=EtaOq%6Y֖y񵌎$AU4)xFa\JyyҘ)"B3M5"IfM̄Z /Ɩiђ)y2\?z-l \89#`2#\rl$ZG Q&DgֱYzg+uZ!(*Ć d!T‰gn]n2qFƃ pL죌tC2QX.Tl&γsww?7Hd4oF$( XE2ǙBqL(Y8˵FoBLTNRQT! sۜˆy5чg[4 .n*4~:)}{7䔣&??L72{/Gǽ{G:ҿ s0Zi'~dj/?}4IhF8MrzI'gY +OS C]}4;_:-H(Z% So.&%ͦuF Ѥ([*U@ϱ?1>S{_s" '\C] Cf+^ UP8ջfv/Z-^UOJ)`_YjUh^p˺5[\ pAFK1^q%] $D(t qzOqܫZ95X^*/`prKIKXVyXOG*wwVug^f Uf~u/{ZC̿߷/gtx4`jo0>J}ur2hs;'1Kc}juu>bqhvgN~~$uszv4A `-}~WS] ÃX_`v~Ci/ׇ\xS7@{e !*lc̗D{]o'p|48o/;AD ?^Jg:r?w_ 24.>LI}3m7GgiOC{0L8T4gA#ڤe{K]ݜ3}8vҸ4Gd5y=FLO*^(N̦&.q=QI*zѻXҢeslb4Uสu`}EOFtT.V g1; SUg'Zַcmn/ upsُM>ホd<#zi=xYb(rp]5Xx1|}O"LD\ |%O967'Ȏ{YfϵzlCkRc0\X``[4) 1Ix@ibRZ񰵲D6-#׾yjna1_C>Tp:=ly$n^>@{2w꤆&_&{0 ^Iu&t.( &BD6hn)F}8c"A!wfG3GGiy K~ir'&NVx<O`yߎP?٧x/qBRw͏J; |uD]Ԓ)QYEz^[g_~A^vv[Ѕ Bx OD\Bۦ˸ku&}yBMomv?^N5ߛ NjvwRLOI|ohI%t{=l;Oh7:_P{9[@&)9CTAGb 6 &5;%Px(%XҩԠ&9&TDttVUv{oڪ-!D@/pN>"+{~Nv(vFgAؙ^v"M.dzwI-doJW%hA^;}eEs?i]"-!1c ).Ak,l}Z fpd#r;;+9?$ _0ec^*7ߏ *Q!=$Dc|1`_&C"L>E >}27A,$R Lv x46&@N۪`j~}>@tLEuV\YwPE dNuMW2-PV9-z7A+*icL2Yp 51 Fr Bi :-e4"+VIr!#:4h+1S kmD,;cKRwJ)kwdZ?;Y\Mz+ix1>PbX#"4]y9%,b`1_j ,~Qc0ѿ?n\WSa7X—s^5=-̛ gƿ*]I@ N ~.R u^ZtS:'36GtoÒUF tBZ;?0O7}n6ɬ'ju*:QM貏=>އސ,VWŋdG[kbGiV<۷f!9z{;BzNuve=-Zkه~_⃗d$&s~O|m5^ZEmE5d'4i;G Bwtiy559̺?9m`]źMN>,zz5tC;NW ߌ~Ix:7ԗNW<-0ɣ Ot^=c<E~sV \*ބxMqWm4vytM']qrԣ+w6ʐ,|JY*'v^{R*QʱFJQzNf4}X@aHA;o3ET."'u (3ؒ)Q;M^u3؋zLՈz䬇2nztW 9LOdY3aN^FIxHOb |ȇ1J:^z'nLh,5ݺgϟgzK'AV|g°2O]IX_S\~ ֳV/8>о]ڗ[߳9>㬖<UʬAyfS+`"gO/GEjZ;OyQM?rFV韏F;״5~~T-x,B׶RvR4SJMF~4ӳZZMn^6N8$B ;t& 좍eULFE ?z}-Gn}H۱P<$=-=,34@΀EO쿧,8K٘tl2N"):{B:=7>5Xj^%|?UTJo#NΪGh2q\wFEmϟϓ_&g v]Qqo{2gׂtn%_X+$}qH0j|c>&0zvI]xJעsQcl<`;Ci^hW`+"{Np{z'mƒ8OXy«d+͞t{FⰛ=:vo;^\JJN0 [=nx_&CK٣@2Ȣ#Q[ ψW!-Ճ.ъBzWKG3@rB (djm)bm)Ȉ]EX &k[Ld"+ (mg,@XQM~EFuE0!n&'{-G(ǿ/yh[Ɗh1U%S{9bP Dl etJ%FQώ9sRT!*ʾ RZHͦYc36}mjl ` ^{XG`;`)qM I9RDJC6,qS]tUO<03mCEH(򙖜eN)^rM}Խ-x) };B lVl `JٮJ[ ,@(*EQMuQD&ĵБKEƴe6eux*O۩ALE(ms.Jgck\׫b[*|**aR/1",DDꥦ0+Dzb"*X̧z3- 횖6Cn=_ؗEiS*c1>ZYK0X`"k"1ŵq TQ͍QmQ'(ߎ${;AwיgV.5aɄik܄}J$k$# 0Ѱ #.c0e4&y*5`^h=sYt#J~Rh0=Ulwu_{_|#~}<O''ox4Ѡn4Lר󭛢Kba߃h;7'1}R$K ^'o踇``ig.dGi4ɤ?7WFudf`.n<3< yUg>M? w0aЩ@*ާxpj\"0Χs\&d@,uy8Ota'ݷ"D>IC@j冷8ʟGw˰ERQh1u. OӼd/Kq:̏ >䷾ng҆Ԃ1"E甂14o_,vT8ܝl߹)ZSV/\9MܤMT A :sUy>JhSRv̻jIqI(VNC_v(FN̕+iGItxc*qDqL~.~q8@w/Xܑ.hv^&(˟w%`  ٖI(MȺ&eޡprOm7Ҋ./%n7 LRYbV=rgA=]`R*'y`xPeؘ̻w=.쉈&N(3W휠{)=;I3D97S Q%{0i{ȁd<^:Xk!`+Q-4:lo5 %#$a ā.f;Uf1xP-^d-YoP/y\d(Im3E<tJTbv10CXx-c.r64I0jL: f!P{6 8DtmeqzT碥`A DhpKbƀEIDb- F <\a,Loqx=VN {SZ9U>*S쥓!C\Y:PSX UJn>L_Ё{sSk;vvKk6i5mWVF}%g6#Xg.9J>oəŘ/.S⥒2ӟGzJ{ɱ6$47gZr9H*A^Ry.E^(%->c431\8 Ō93˭:F Q,K2>{x#>|]d]oR+eR1EYJaw10ve|9q\~N˟ha#x7'ͷ-lD$y6:Z-hK`XKTϵ< jy>u:\П?'|%f4ݟu.9|]:KOVg߮`Er^tS6 2veV-!ܨZ>(hKg[M޷Ti>c e & %a+&9NxQ >Iwqȑ̏2g|;AquGaMLjkR4`S 5`c DY%7Kgs|#gPqXCBĀjHDAwsRmj/lB)RFC7o{]Kc- !uD8F[XMP>`Ō#}d:DH|C O )Ug&XN,( .p-T i)4H8%# w63j37TXmQxdà](y墶T0@>X:cɃE[@m"xxz[Y =-|oJEAAXHDtt ְȉb8ZZdSg0?j"OSOv@P 8^?3rԃA$06&0콰zT[-BDATHD[J}sR??Xo~.+dv] !)CRԴe׷ o@dCR lt MFs~f#zDQ1J22@<iʼ2A`՚qZ7e*04>:}g=_arV[+8 [s@S*bL[ {ɑϣ0ӐwPؤѠz3 ڈ/@x} ߂n %Ȗ|j1UOaB}PW[Dv};Y}|}I;. 
oq?a\bW3z S]?y^XubÙ4}o}p HDd%SOsvX2ɩ0leMr&+ {"@|C4iN/7i*24g*OjTw{Et4JhSRvL[<+ ¯6$ŏ]L`'b`4OJ\V{䛛D羌7=?naGW*gwt2NOqb;/tY29wm$# G]  7OkIʶK5(i!N_U}Nuu.e+ԩ} ČQ=pj jG5\(v,6osÍt.Gϵ6X͛%??(BfRʰLNx.0Ud|n44uew^;K{DΉxS62 ^R"HV {Nnmd@ ;KJ+yW G/@<]l8fA#uYH&p,QY *J\O5vuJWO8N.]M֭%5dMȿ]\".BvS-ƒUgvp#miVOa"+ ~yv >߶'u73W}x͊68) IW\GCbw nA/^{I[[K "8e Yc)oIs`I$sQǢ>vMoyz'0?]g^ (h C95#r4Bd"gڵZWQ! 3M_U}xԅn>jrV?}}>է&_Q:(CHc[G$ApJ&0jZknT&gRȅͷOPXH.Rg&EKF(T۲֝g=9 =:dsg $kxE,Y**uf<ɵ+cє+ Qqf E ?Tߡ+xL~qŹ=C*z-*OHaJ*zC AYwbʉQt E(wdM-Zfrӡ#yዽgfa&=k$x: I!"Y6@,SxN#5z{;hý";{}#Ybݬy$~c6 eLf]eJf[Y{ /005L` r7qaQQ!#L `BX g+r 'N@gcm1{3 > HltO&!KDȅN>ȥ""0$ `[q\J8#Y$)N HJBRGsN&/*Xp1 +uDBhG g-N$~bfp \Ѽ oǾ43|gg' IrRX-Q`IOP ^s(g" \YɃmJIĬ'?y$wR~6;CKi;{T$,0D_ϝe *R `G~RCGhBob?]^q[H}Cr4t*͌kiGY"8g_t?=|n E\eE>8=9jlI-]45ÚYÛ,.w(!̳wz:yd5VljiٹsHnXpsWc_- =VVYS5{UYP=.~ۻoO~mݛ>ye_N> : LǍᨩ ݅ܧZgn5?'lݴai܈-ֺQ]`!7B|\|JB ~f~쏫iF':w-> ;0 v |]E٫EgT!1n@.6mKre4'V%vRQJ\0iڪ㠬>P 5PL*jc1J :oT|;D|'Ok kz˕A["9 4)ZCr3{.15dl [B['<wٝwuκjz̴:ah`nQ9qiTUH'nQ$SFZZD*R!7ǀr*m\>\ESrT -櫥,W~fۨ|7ɋ>v:*ONz?Y#kJ9}jC_wnm*^6Ft_e?d>,]Yr$9N~HJH#[$@l^ͮMUMu [..i*?>]jVT m]f׫lE&h38iɦ_e˚ӥoJ^·_p1A sDS94ɤ9`j򑲕6^]3VrJ.д}Z9h4E@ G ab!&EmVN9%1}8pVNؼSL޷h$ ۦY6ē`;6]m%=%G6WJ`(:=~YChO}j\g,r=3b&^BMoc~!yraKdv-ޅ}KjNwLc0#f^3_mۡ0ǞY.԰\[H„$#7XMim N[S?I<0+ZI 55'E۹|X8eK١J1 O[zQF.hkZH)BvfY6$q/I#.3bx o5)-AmJ2bP.߻bXYw&ڶoХ2"Mߦ_L|(Me"]|4iazaH|žVhe}<,ب_ Lġ.Ǎ3n,䡫'^|Izaӻq%Kn}֡\$b=@`nȗ[h\ S0 \ͼ"h$wȭ]m#*yTȦ?W˃O;bl|t-'˩ 5&y$=dXL{\ce۳/vQq W3Ot~=^=ƼJxJ~ۍmqX=yGO8wNR |p~ a1;qtV'y]R:;.m;@j%]roYm{Itq).՞\hcArUc7qCe[L\UMrU%@{r!.W"9*W*\)RrǕV\ 0rXT0s1R1+QK6+UV\ȡ \q\\DRsǕt+6@$\JF[ D-[W2U4+MGS 4+l\-s.w\Jz> [N}XߢqW2Wczy`?2`0Uzԣv3?gެMy.>}Q,Xr]bb??gWk߃s@ƻ@L;gBp%e_ D-;T W+<` Jr1ޕEùJU:WquOQ?D0?At5Tn(&gjsǕT\ Pۂp%uJ>殝pjC܀j*xgl,W"Z_ T.RpjrǕtm>z\&+ \` T Tn(WD0q%nu%*]\2Jr rpp[IR>bڐ}y+riaLt`w58q[Ge`pJyF*<`caf33cFh Y?bˎ7~urY9nkZ]L?kdb>҄]4;;.t=50gl0ov?}*cFg_?\6]297nF ݨ?w/MFjyg^5];Dkj]Rze9!cc0ym]ہk)N6,o"ofַA<̽" }ifC]fv!a؉_ 6MovNN.;\O6MF@X,T/&jC=*jjS ͣoW"}qj9{\hqݬG4]ܝ^Y#X+ѯ~KŲ+ߗ ,uճhqg'O?V6;iߢD [b,辘d߻W>rg3vVet5y3a^v<;og9zy8Z/7'2Y}z1%g<zҊLd*:K-_ߣuKs]'I>K[]+JN7*gѠk> WI27gpb1{":[^n>]kT+mЎWl)68tZGb\5BƨEc)wWMUVW]5@vPRd)=bpr}b(0w\J56+ vsk`RX T-e+U5v d • \ͭJ:;T%%Y,W*sQՆq*WL,+Lq'mD-q*W\ " \`*'ծr#+-fΠ5|>-}Tv$7ᱣ7l-Fwng1nb#:N01=Nn-Qjþ0=NenNe9+ hRn1hǕ8ppIq%bpr+W}R*WZ9( W"ػP T.RpjsǕ'T\= :"*W"+ Um JTF[CF$JbprC1W2j?D\KJ@*;"U+U PqubK]`,(Rpj+UI!*$J_ T. ژ=D%j>[N}\ÛAqDȍZS!8!`0Uzԣ g%T2nf`G⻛S7@uѧ0ښ24)ؘ5WN{10!G&E5Y.ѨIf:Yw cR "ZPKI-Z>*kjS ΒTD3\\K;T%L!K4mA~RЄRJrޕDSC8pEyW";W T.RpjsǕ+WXT0]\.WDzw\JWCUFpȵƔ+UخR>V\ ȣ1 \`,ǻRJ>/;D8'ORǕܕ-g;T%U$Jc9[TnRpj9BUʙޙ F%#==R.W#2Svrf{\^[qG@݁cc`9yEX**8b1{.vO#Ufbi0Ё3J?zˣ qr]1^әTW++W*ʍ\ZobRV\ B,W"+m)m`rǕ̬ެipVA+w#R Rmp t !X[QsFEWjqE!+CS9ޕe,Wmld鉛+=+L;TeM$"Uc9ޕE[L]z;T%Tn9va݆fWkXpl`F&+ۜ7Wv(8W3A3zi㐟jyM7l=lF]=F)9W uQ)S &#=ovO_GD+`GlM-/_l? u^#C{q A?jDρn e<utAw~O/m6ےv)oڇA7^Ƿ70˴Y_ށϜ{<~hCXeSv~&O_Ɂ.~|ǷT?$]B\{u*E(ޮl>0>QU=<9UOo#e"O3p@siSǬ,-G,+k .G^@=>~pU[ŶY5|*irlICIچRՙ\Ʊʍ2;*vB O>:/w7437;Η_*O755鞨(M ={6^,;]eC6jewQjQ ƪ SH0Tj۸YO9W]*8^Uc? kGajj -OOBJƷ*BEo0&3YjI&R:7'ZZj  9Plb8ׂňdL#a;qzNS$ZLm<4Dž HXRfUDʛmL3mDȤT9b{ b>Y$3k mqsùk3d)KNI9_}{"f|Ճg{lubT=cJQ1vڰݳ)F3IWDISKQ/= !FVYN{'z4[_qCW@ghF#IulRi >BXH{qA1#3X2d]4b|j/I4W-wtM%%j(%%Qj1HGI+*ۯp߬#vnE"NؤТ-)$8: FO4)|NX^TBZZrPR<@l{(R.I 0̒QB4BFh եf iK@T`-AA[:*NBѬ A{ TLgJAQč _;`Pf֙5ʃ#-d~GB[G@PSѐzv]R e JuirgT0FrTSnkI34P`d,k"1ѷ{2{xWе?V|p.#hҴA0( ,fĥ*3fD'cEWeH;!pBDEfC|du\=qZsޜ^ :Z|o}&>MLc[oT^卍PLdg%Sh+J]e ,#hyCs ~h̬`BgąsA9AJP$rAV *.2Mڴ.N5k4/= KH`e Y%ueL< o 7<9JrAkП[D"jF{¶eBTD4X!xr[v}wu4P詌:#)V4XBh;K)G;=by1k&! %:]%@_>б&#tm[[Dhʠv?5Z%l-ѧ@hg.I+u rN1^!B ; U3ڄ`b#hl,z<43$2N@Gjcg? 
F#tf% ƉJL T "8tDY]õ3`y0 !SWI}j8FuflNX snK 3lC$5j@gV-ZPt[U #},e4 * y4LJ `j\U;J@4Xu`yruni_:^#љ$;DU n=Cn5'G`3FOKac`&N> ;m_0j0q5H<JVO ]z313@9$nz5l~YeFIJaADy آlk* FD>MYUf\Pzڠ`.1J[PJ;H]߼Uo4v+P!>Nw`E]("Lmb !᚛l ;XYO_˪ Bh zlr3}hFb1;$zށ U *uHuU c@sMCwVj E5kՃ*TJm\3i4t lF.d?9'O3P=z:D=NqVkA+wf*C-:mP fxqU9 W65EOhIBj4CSQ3&{|tH'p:q˱ %׆)XIAb0xA4T;nfkݬ1jʅUD$b9\PY,8Ip.Xb8l[mK95Tٌݿjyu[e}^|q ?losg^Nv7Wg/^Yh~1ƶO;6Wwm"OVb:S,Vrh-}ӬWL"RHxb ');Ss*'m+?2"HMu\` 'j6'4Aۜ2U#a'NBakͬ& I XƄ81O -dL=Q 6py@Xs*h~(U }tŅd+)m ]e1t;]eUW/q.42`Ӝyfic3Z3Q* <&ŮpucUޒ^]e Zzt%\PBUEWRM+)9|:]e]DRh$ʀ "g:h;]e\tJ+Z60hBUF[(붿GKW+T{X B҄~!u6=W@iSS,һ^ڣܚ"bM;f8oв#mrʍmjAӰMCKӏmzT  72Z!NW-]@bZ+.DWXSpCWUFZzt #6IL6ʃUiS*ժt(e-]@J4̀i ]!\MTS*etQ^"] ' {ZCkj(vTKW+EAt CS*tQJ +M) B7'Ԟ{~heR ^"]j^o3 fԝ2v!WDWM/RǮv:BK呎 %5[/+գ^͍kSRPCAVQn%hΉ { 6&j~nxe$kύh{YܤLVԄ&W# om䚟)&jx>nm֧aFrҦt6eV{5Ql?4%۱Wg7F% ҹXFZ4S7z]Siye #A#PXo}ʧD_"?s:@m?^1?쉮2iJe>N»&h/1zLpuw>#ɅYDz[ C+\^oʫn9̄߬xWJSao{sn61l3ƕ/ӊ*%?ӓt8r y)CY"R$]4$c(~:I&j%l + UU/i8d"˓%Z`?s˄ &N &dHfE9rO mm":N.ל vz9Ddrr=>V|/{GQ@§ryή[gI2y;M[ " w`*HB3Z[íG'.mItFͨR+Pv(⣡T"P9X@n&H]Ҟ-Eq΄bJ؊{pep[uWЏ4M[^c|9>A|LFO\'4IˣuLx8'tPa`GKP2zM4AYҋ*@_T1@CCjR t`82MU s,PYҫ28K/ɻ6j PPq /tZ/&q8 yon"@TYpʥ荰D<`B)jcAawxpVĹ2D&BU24CTW*= nKԧxP1t:Q!TBr[]K.,`?hGF3Plb%ReK"UMO8Ўz9F('I ± )Й"[t",o{fhk՚}W57=?oZWoѰĆq2 Fn׮J)_k;h"\D6&Mȓ0&di޴%%>D HHm`WNv4rf8L!IX}1xʨQ @'`mb1j)Cڻ:duM7A )R'R) i 6 *F!c(88A ]Sg!o5A I$-T ,0Z_T0(9o BIfxDAfeDiK6(8I:8XNn;Rq|إ2A (qIwH@Nq5!ڨ]$qoS_a9&Dy1cgV ~F\)wq^?C-L%9"rF&1qD&gZC3LFhuc0·7C.;e=YٻM*a;se\ܕ뮲lםd8e 4̦+urDvig8_}D%w$*&>O7/6~ӫ?Q?ן G` Lǁ QOS(:Yk&[f5< k,0Yew以u)ڞ_ecMo2}=b'O:G t+DMT.QPRF 8!+.\sǀk/8#]W[ro{h({R Ҥy@Y$T{t *b>&Ke8H:J8,9(ϗ:Fv龧qeSRXo"(sN:=s "eYd裴;LLt6zӂ ĵ`ypo#qkv.>cg=ez]ko[9+;M~`zll~ٞ`P$&䖔=Aae"݉KQEު*bt;E2Ӫ_0=Iq?:IgTNoܨ|tMZDy>dCz.|ONҹIA:emOewH>RL U:y2) >d9sLNyƍ"ZgMV .}~"}pKLqZxU \_dByMkByurmwu~zowi0i,ոm*kMVuzC-r-uT-ߧ*u׈Y7+׫޸*!NUeKٴ-swSz9WH`.峡^)gCRةWNw+?!%`iV^/xpTHs4'/jC׬Rޡ祖a4no~u1A-1vos-'; F!Yyϛ}ԦYm#&4Y3t(rO-ءcsHӐ1}J4no*,g֍7 j<2l7\O{Յ f٣ (Bve#Ť@!rQc0X@mA3<·.dccz||ǜhAxZV6GˬKyṭa /B0Y/J nZ*!#@Fb9M)X\lnx^ɚ8t 粪^V.똶 m=|G;ԀcAQM1ٿ3GsPH2H S9+ǔs)K.z0M"&sqdkj#c5qv^Tj/X;,<(<͙|.E^qz|3xO4ەJ 0Y9,Ƙ|4JrBT.Z#R.VJ&攫[O@:MC r4+4Y1P79bfѺʈ]Mb jWӎ}Q[VFm١vn)-))})We0lDZe06ZwKvFf'dU<,\QL!WEG&!Fpѐ㈥X"+ۅ:HA5ZxXMxبWcAj/"ʈ:Dqcu!(嚧RęYd8!K@s`ATED8c*gcLV\p&B[\BY ܒR/nu \3i싋2.;\ܤEA`4hncOZM *2}R H^$2!pXv YV1Y<Qi'~tj**Za)QYF! .@N;6Ʀ)O"#4*5J1jM(IN#Jn=td)+yfzT>ʣGVD#u혗Qz\ h3#sQZfCr5bxbqvwv* m(:/k 4ZvٳfVGf e=ɭ! [fY8vyGBH鞀m*j&~ '' ECFIojsܧrlLwN`fR{n|* ^׶rѳl펓4,Kd.rz6E\k !7I:vB"`!OH-\b=f:_VgsAF>6L= ua8(\j2o9S5y<㶵O{>o/5ui <{bymG2 >8fAe\iWNxRh2N)Av \+$2~40Tg]B#}gMeA^n1ybJA-6wS}DtA K3pa!FNޅuH,}F) K>{AV(st " xp4Hy^srx/e|֨S<ϔ-VE-#p2R15!O):~;O AH h 8^?1r<'oCXRl.( 9$\P012y^zE0T\ :gѯ1Ubp3M9E#^. 5,!+K5@ '/=l2ک )唒%c"d١R%h ^Tg!θXر49 F4hR>pZM,8SxgmwFpڟL.`%IhA35Ħ;05VYfm 6?9nܢ by9kǿxaӻaoN ?xS.'{g R9EN=Nw^fӄ ?~ny>ο+r}E}|A^XYjAYH٬;5p#+ըeͳiF>ffEg&f*0ї7~h~Swd)`/Bءv`݁|$gڨ{ 5BO5eD&10'-K"d.3&bD!vRl,G̟J*< uH<>;>,s$N+tEF熳.rK^ϛ٩{q\:[_/%-\|j2 kU݇{Z96S[pUG =zG(?GD)$%o ]E6|*$@WOx71B5ҩ)th{*d@WO$ѕ tQJv'HW $ CWW DW{gP~O49tĮ"Zv偮NWb͡-$Qj0j32HPV3t%tsºd_T3-<樝T7RFhAr2ۜ'{y6G=fg1CԣE5;{vc iB.pl MGR;MGh 4 UKCW{(=+2m]`뢫W !tQ Lebvqp%o ]ZUDIā ]qjR ֜%WD7Czt%\4D1tXW-{(+O@i3Kp5j ]8%zt!`׿6\G_ֻ^G730I\pV!].Dy ZYrm˃w4Q^IH'4en2t7ƽEZq/LaWl*Pք\`bis k CmU{ǟeh[!bݳFХЋ0٥G/v^cH,hK+~Ԏa'AFx'*d ˴WLkq4sT/oPUFY+{]*5qmzտvR{ChRf"veyoἧ^+;| v懝HqCF'-to|ng4N ꈅv!7FZJn?Gm7rtx zLR /򸩚)#|E3mu8ݗ iSQ";Jq52J)t"0grz.CgЎyi`89sH[2n ڒq AVu~; $<ڍ쮯vKs!n|%gSK 3<_Qr&s1p-*N[jw;!JAξ |ak^oXn2V>K6?w4[6l oe|tˏlvߡ6S{6McS56[h"tV*[%ܙɕ7߬w|?Y罚|Uxq:*n27g{m;ol@3e?Tak Gd҈,dլ6Azx諒9IJCMsҲӺ7VPk+Qk ǚ c8?R<{t&Kۺ&w.xFl42swԬoY+\oW}:sf"j2a&\e& g Ad,mݖK]zfYם(TloU/XeЋ#sXJ S2FJ-R{U{tZ 4d! 
9٠'LnLS84_50! DWo#Juhۡ+V>ݧ #Tna$?t-Q g.Ij(J➍Ly`A56eք2Y-f``.~/<\f]'%l0]i4&ܘIwv\7`0ϗ#7h[-Q鴁)Д2:83ϳѧ`MOQ ),]=Ē޽yb.> ld 0 Sr:FvYp>k-2DHzQ^g]( ȍ"g2PZuApVL=n cGB7C+6CAhAB =L[l?9~}(K M?!./]"紎53OaxFivH,YFp/_fqdw-"VYHX (&63sywG9N o`|V?߿J461&;]t|#6\#s&-έ渃_EIs(.F5Zt7vn]9T+^Yov8 {w~oQNf1*P+>7=(t;~D\o(uמ`Y j@NQ`OSA{o^͋qNokPa9O!@H0-0i0H .dx`^sC BQ}|w3k"^ѹ!\0rHRIp #5`_kw;B23lA6F%(EI6;U$q_פ /5y97㠥AGV^wۨʱxk^ٴt;#D20t*3 Ù!!&X B4JpͭEp~\Zu Rb&y$Vi0Rʃp-1^#j֕ܯ++L쏦8&aJB3 J." "Mq15!{ W$.3 ]e)LbG֦€v`UDj͔*9)daLIj~ʭ* F'pPiTb  s"~oJ+?X$s,е҃ u LH-,T c|TݝLprCP+NmZo۵пz4OBx c蠺Mge2;7oϲvc2~Sٚ-ĸp7Zs@E)2+ ǃ$qi5XPxm2~(4vjrT;‘nW/e0e-]qt;8Ke)s {e6!$¼A GRY8Ѧ,EQDa).iD9z 3Č٢Ϛ* *3un9@h( =UԆA*:-yOdy6%=J;tL~,䳪Jdyaɫ:%1W]/il<5ʃF)6A# 1y `0^ xj)C /=}r7쮯,wW ˵&~XE:U?d` q!(!`XsmA7W;"E ± ʜ,̰BiƝ#KЈ`@޼s!ȥ51UՐTf5Ua9EO/3y׽uhQg3qk5rU]((0t$VA|~WiQǸcYlmKr wea^y;s9wo:Տӗ^>p?Y;q}*Z ]HKܪ'?]"YT5VO;z5u6yEK/'97[Qz<_uFe*Ib F5+g$ j~1MH~][*!3VŅG՘}>UN5OJo%.~RAaRSQyLʁ#8RX'$qۢ@a f8(}Ì>ǨWHmxŽ^yGե$8C^Ere\H̒^ >aaQO;blM&ڡi<m$a.=ϽnWR.% w9L9srZحLn`pЇlӏ39fL5S7oE<;[饍uV 6:f'mAۤݲOX3A0ɵ `#9oB%@=cF@+ ¯46]7fe`*īd8~4 > FxZ j1qj<ɉ)8p uٳˮD=ӓ;E=_wefN;iͶour^s+I7lm)Bo 8*a8 T:yɒW 1hV{K" DbN(s"2:#9X 'CVPRTid,&XqbXXL30 K33>]d}roT0M>m+! 1hE){@9ZB:$p b!e_+1”ݭ O!;{$8˰:% ٠vĄªc"^ҡ9َn2+.fWPvya=j v+dQpn &1O:*/Q{ B( k7)v++JIѣU r , KF9EfI}bڥDHV__)ݹ|Lͯ5Mq"'$Q:\G4dFds,9%D$DAo ?[r+d5ӡ=? Herbɀ~Jނ J@ rmHi SqrV1T3tHg9]D.i+YFL4UENvVLs_ HO ݰ$$j!1JΥhZ1pc pc 4-PӏVS-pH!: +2Y@)XiFPJ EcZŨz?n62G{7g' BFHLFFN.o0\Eg8G eӉgR(c=Xe;mMiw;+O5Gř6(i[!RJ@"K,xP ekQ|pXb#@wQ 6(b^dTQJ\:V)qvH|T+D_B]@ r&P`i$[K-nɱwǙC ; C8/ cծȼ'OF%H Z"**I0OyI(Rz .p#O@X_ x7.\IOΕ]FѩSiORq Jvy5SBD*ιXUYbZH3!tϣUW=]ZUzճ,kfF#]mϴiIʅsѯoQNEAfXGBL=Tz]e= S{ECU"E:Q+jnS ,g0 a 1ч&SG&ФK*b_ݫtu.Leo>3l-]]KUw%Ff(p[A&Bν~4()s/uϭq8dz-.l߮q4e-imwv~U흷q;r>L' -n ]t~z(̳|Kd\5z{qߗE]ov r!.}eTY{noHBbb͇i%iO=ȡ"J 8+KYkeuiq{YOn8F~'Oi ʯL8҄t)j^D9sbje DKzS6= L"&N rT*E0b:~kӧRb?`i 11i"ԄFe अhB$B! zD "d]Z2'`๱9P 4@=0磴D'՚ǣQr@H\rh0RLqr$0;GRD!p3ֽ8;rp;5#!vX nl}=}Dy 75Xs?jQ"ś8) T.`|+!P$D 2\uS N-\EOLB=kG kIlbG'v`x2rGW^[-mUM{N8/_ޝE"|*w>'ڄC,y7 us$ 2 -ԧ nx5җZ[EiSM~9K~*gT8:m?ڴ<\l_f59QW!&8kLB/ejgr~%Y5U^&޼XĦM+ōBeC->=VJёذc g$%AK1/BIs䫯cXe]tv DJS}drWRN7[/<ٽ7p 9bl2003{u!WCm_o]xʞtgEm=YlX3@;aֲ3lv߹FMJEy]|].Yqس%V(#9nx616zA(',iGE$p)A2JNnXmd@+)Eg/vNj-Bi6HֱēGΊhY *J\O5 z+zY}::U]koG+qݑQ(xo$X NkS"J~o!R"%>F"#pt:Uҡ-5?!( Z|P*g:8oQ^XCkF%/ZAJLEh a R]ܐ) SkP}:UA4HR*IV~h_!D1,e2vCi6pyA7GxŝW KA TYeL NBF8딒9` H# D=܋d#Oqd\\o;U"CQ^,jJZNKM01'7zIHLRn]kZu4n6:t_pc{ús\ا ?k? 4T%ĵy+@: !sL4D)gW{\:{6sԻ#2Zsx/OdQrvl9ƽXlemQ?Y*~z/A;nrr (Hƌ 8)IFU33f`m;@qh> 4 LBİϧ? 7[%3@܅ L&\zfQIW˿VE3L2K]8}ϫ7  wDIB@P&! ;OHu\gv̈$gvIJ>!l}R 29Fdte+4Ke"0eMLN2RAf)EI6hgP2:Ye4̂V"ݒM'h 5f\]u`({Ի'Y0 ~Fj.28"?a|0R *dm~ F~n{'e$$ x߻:OOuoޝrJ$ǻTz^øϫٕRoDtj O2xߛKx9iϸl݃nI  vᒉKvrf\7 ?z2&8IfZ͒Y9.0K W&0cjJ1Í_Çd3~?;?\Lqpwj"%z~>fׯO/.O?UP^'aWMfm 4Ts-izwor{qѯɕ \O?xhix9 ?̃J} j_n ޜ;`l]Is+Kuň, û,vw4cj7XbGًߖٿ_?TIӕ pJ+u]]KdURU}fL{|Ox>Փ0kO>yX8 sw޾ǷwO>pa?7=:7pM ' ¯0zwp}E[*ʛ--lP1Mz/nk]1|m?zI͗x&ftq9HUr;k*0+e>Fj>G.bM!l E2 L}`\jekb+$P' gc.K"qHY,7$!eϙ!pNV8hv8(Gϟtx-$n \ Ҹmv$S.8Ƒk}6v2т$JȺ8]\}jﶾĒmi!Z51 랤}Vws$”WccώG19׶w|[fV륣9=G)f2Li~d oKB:#t@wXyx2dxԃE2JGa֣EUa/jjy ? 
tg/#v Y2e%ɘM!IJ3 146$A.yf3q+\ /f6FXݏ%,僓󼝰~mHbꃩꆞm k]>_U>Xg՗aWyɞpҥ!;E7o^E)Χ:tu>8iPfث8{\I!b0j|r`̔bߕ֡x)KqBJWՀ~mkm"GqF1$|d%blt>*ט6g=$b7ݖDN?N?N|HB%ZG+ {N$b"_:z!S D 'DPd״q"UTG-DVWP%D6N0Ej3q6Tz]#K9Rwt}ճ$i4ӓ{%i\$͔>rK C۷:yKčHʛS1Sgj>-8pTޣhJVNfSs4ĘPŢk|ɹ(j΂EKCn\BT)C)le'8;1HFfWi4XcXx)Q=.`}wb'9dqr7c2bCM5FL;))d yrVg&kef4)**ljαaIW!¾P d+HZnĎip1ʹ㾨Q{d;!JPU(M.\2I7k1Sg5P-8]@xX;a`a4E'M&HN8R-&v"SM16nRQ ZE?jQGA<;߾tB6<%uJl}a(x|KfC ,EGA.hM ;Ĝ1jɊ/1X$xClG8j($>>UC+I٣=UjLҳѠZ2fh3qtb RФ9/fno@'ӡ4KGuQ:鼐 Q+#Kep J4#jI& 7/GmfiZ|{<1Xnfޕ0x5YRH#cG Nۮ( j4y!YI^bjF>RRłPF:I 5gO;\#V|ᡴ6IpYVF7V@dX )c hDV6ZoEW@N?/d"+WXYBDCdj3LєV3'4I7:| 5,%jrvq)<-$-:ȋD1b8קN|#}}SMWjR4 -IwlpJҺXf&ʘcM|ݎV;C7z89JTeC 5cHR4cE>C6o!&x%42Ԁ/|2G;$cfn!hl=n_ԅ5d{0!:&HM)`jR4KQ aJ*CoRfl6KQ.0;Xutbߞ;Ahɘc]9`ɪ+>}ɽbC {/QB`YA'T3IP IHZ!%%#be&f"(W)s2Fɒ*d[Wj%EoҐzYk>ߠV2YsJԭ*|Ćy];;`^F =3`<;ʧzʗU v[w1tI +Tg]t @1}Yv$IF.4 ټ<~ccc?гoyN* 2zN\JGJ8JKcyJvλsP-5utƇ":u ̥S",9)cȣ u"]~Ҫ5eɲvHnvm2?1ҵ؎L둘cԁ~t\ t"1S]wBЅ$ԁ{aapݽĀ{At"AXj (@eĢ`)$ӓ2lEBJCcwݽk0uF׭Yjc*J] SEm vgXoaRoln9z|9;&swΫO#;.k.g3_ {U3oܭl|룻c¯y߫5\M~l-"qw_C }u(՘tq(翚Ba*g!@F/L^HrPZ 0ڀ(e.&UVLt T)bXLb"P4"x/D%Yb~7ETDPAVS+gQE飴Xz\FYf쑛svoʫÂ|zNuzwonW~`IM?l9n"s)If81 ]u"S̠C v9ZE;xl~&гzF֐dxCI kL ,15Ծ'ޛduBV.Fiȳ4?;\O|A&Ć^]n:Ze[yuꧏV*Q; ԫݧ8vׅ/SUqL b+nV3)VTo:tFsX#-? DϭJs\lv7J^7G.<}Wp{Łe,PX j{GэsT|k8~X~jEZeѬS?\rmݺ3wVb& "Ob|_;mK6*Dq;0+^9^x`;=l\JU|&V'ZE!tE0x]x=4_ ,|{?5xϰ;t0;dC,!Kb1BJ'E2^[Hҳ q&W50ݠ:u6Ӎ{C^P-/dhud]Ъnd~=ljq˷v-V#m`m#IЗ0VTwU/_[$CЯneJhREM5<0,sF͚zN ԏnSκd|жsNX<$12'gr9ؓQՃ_J55_}yƮbh{2U.,q/޼Wڲ~z%,$TP&Jfx3*i_BNt)Ŵ͙䅻=djRChG <$[709>vJ~KxFim\ sGdUG)@5˟4{ǯ;*W8WRٲuzf0BrzKHP\2"QhMƤ(QI d@M tl lڐ2,I#!xb89d2d|/Kr"Aft0˼rX)ۨ35L|o!ۨhW%z6+] >sVs&Ec=2p' *zж ѡ~Q !s* yfBXHYaJ(Ek1ޥ9 uϵ `$LTM-y˹IM6:t81X,CKCN{A3~\הL )${}v ]2`2Z{_FU)x_]=cSS39mԴ?߼[8C_ecyf\{9\󅹍t/C\ ,CGR.h;+g3i4ذs+CQsߊK03W uix5b Cxʊ<I3}X3ڤfsuH5s.ڮNN~_|("hdqns!$뱮U3,dDb̈́yeu2o]Wӳ-ݫ<]qw +وbUl#y&~9Y ٞ_ͪt mmByۀ$4"x*(b,Txauր%fBkdr)0HWNs-P,dbQ! acP53a`|v^BoAuRӁ ?'<1Zl6[d`E狪P޶h#+, }c\йT+D(Fei t[iVsj3tj%Q;Bek4Qx2N8`>ѿu{skJs\3Y8t:cUR>H?.&{.B$~IU;;o?#ԋ4,&_f&o9&_]8Yy 1d坰vNlDMrKwFRh?>?N.k\_gϪTr2wLM&ow? 
V7!-fn˳io~|3e:V_췫[mm-Z"Nvl0?גJXYE_>[?m 7t_7l^_{*םS뢆x%`dod*)EM)z֏#&K#:PnU]dsOjn80xnka}'TKj$" W:(.]M+nl^,k@[  e0 K>EuL<]3udãM6)h qTmH1` sL KYB&N)*ޢbuf<tSHj@$SoZǗY3 Vs6+-vDo2@'jRdzd2g5Vϫ^RjdNGT)Kz)ԞجϪ5JcL e9g{Ytd%"B69$[<[53g -ZCoe?0|b_Ě0}l)} 7>+$78gBGVX GCGܟM/nAML]_]I.,@94E")̣@9^߼]ܗ4]]qȾyľ .%l}44 m[x&$5nZZQ?_]fp}ByR1j֧y:yuz?~dr&yw}}cۣ?ߦO/?M@U ۶cv*}h+$qLtEzKLΩtTht `r/Fru^BV١K.T+Wi(H.D30@6,Ϻ'Ztk`]:tO{[ڷёrY`IH $nk1~aqu3C * ߮آ$iA,.Ȉ!8o]IpH*bTq;v,w1KIfPT(jfҁ")h$UٚHP rN)zr *"%E꒤/׫6US}_tvu&tz^8e)M}e˛ۧe{px,몱֯cVfz1 ?լsku,-ȝ\s_] v]Gnް'_={xӛ]pλgv|fu{_, 1ʙ ok/eR>0BmqmKTt]k^ouQvwSֈ;6?<;+mͭ#ss_-MW{݀R1:kCg%^7?)ۇ4 N쀑9)|!֮ʘ] H+ .H[S-){HLo E/;>FId$%;Hz/rREF.ڧ흐ؖOmW )l#M!?YJ٫(cAJW6N٣yolaXםOja!-[6xӬK<*IDn8Ȟxr9's䵐mI-aI Dh8H J>'*d%'֙9gvXM+84ƹPۜه}&.o7 tq~3{T@Z)1уvѲѪT*J)bM3&]oC%{>)IAiUӦL-M\N+ \gKED9gp9M+s(Ywu㬭Ǭ="]_+K*/QQ.m2zE ь\e)Ma1} d$0Cfd(Yt9פ,UYQ 9Gu$Tg隙p>I|}}rMCɈЌ3"q̈<*׹T1@idr2a&:9Bј=cAjB[hGRܘX:!RLH!dD x=JLF)3b3sΈf~ԁmҐ:yɡy6΋f̋c^ܹ ;W9Za;I&.:TQP VBtSH)dFE>q1/#/%6w௪qWXLՏq/H OJ+Kj3tɚj%(YJXR@/?NWZ/NO)hAkP]*H7}B918k̈́ t>GЎW/;~=Ș22 5whkb,J31+ȝF؉F(ܰq:3b &_;HX>$ %X˘V*KbT6ey,u@ 5hS Qyn=*l?Y1F}|X;G.Gz=!K>mPctZNl?YOQ-\^u5|㊇8YM>B/R\NBAV"N5#E:ӎgS=kUOy5͓}eaX& HרS<{`J}֪e@Nf:ӊ):A{Œ_Z=QAK5/dmKCE>}2x fdeM,.Nۓ[z'kiFf7?01Q_d1nNdKRJp3S@S$W=8˴*F)$ l'5`DűEJM/6p"i>v c N))\46+4.)*:L$@^I7b`ǂfXgF5jR9J(4=KW3 g"φxj5t3xt5ĪXy`SMYV0J}~ޯ]7Fx:z+?.'/x4Ѡ˛2':nQzzJX}py|p{Gq/LDGdFҸ߫է7 q>gCu;b"^+L&h9_8aOw8^Y\2^ 5MYeMSc|:'KdG_oצ"VCgs*]*% Nݪ&@~GbHB>6*T~tZ9$lQ{.vuȯ#㗥ݬ\8;8x>]"d5C t>iKm1vbڵ":fqUc0oY;r*qS⥝qøî/]- ϟ|o_1m U B/M BDmuHّ兞}_\j(@7.AF׭5Hl{oh`>s+{ Ν]N~68^*愉 Aȍ:pBxIXEI>Ed0C_՚+g=݃M@ܸCma^H µa U=>Z7P S?[#'UIRѦ9#*ܢTL23y_}~%%};ݟWR[Ѳuv<n,:|xyI̴B&DY9yŝQIխ:ؙAj'SB(9ʣ#H9$P"}sn96Bl 6 %FV~R'Z{ά]PrANɡzqf# )0#'xi9; ,+mP)KV`x(QInIXDT)h ƌ.s&H]t.PiY@Z#KI)额쏰02=w:9KuR&;% N֑"5 K@&BX7XNHŧQXoYāĔLb´|Jdy !jeR"k+^ xU'^`@k UiV/.GKCf ptrij1 YR:+e%e,'c&ɘ e#i0f]}o>S zL3oiEZ!Vx5=٨JLS{|\i!fl4ƜCd ]*Q1I~ysCc K2-' Z,H$ʁa9KJ;h4bxc_|}S>pE{Gc ۤyٹ5[JBBm+`*򈴐BJ j]rk ;]["nEw XZ|"#Їe'5I:2E$HdYdz& -,]%I@\"g.d SF4(xۆd[}v^Xf_ѦeND#aY>R \pV)UPPG`11&|6h^?nqdib$a9 ! `$W8[Q IHUF֞/ǙcN0‰bnu'Ԭ w2axi!` PMQ(p2`]}s /߆4!82W{|g.aIjX"'4M{?d8&[n|?Ç2806ya>R\2_{yƱUINz‡* 蜔FLJ:@q0z.\2qxO)N!N&n'SRiU36 Kr10<&kui5Z3 7Ђu}497'c= C]bHU}=ԜDhPEZE+g%0l4.9?緦>q3B)Us';ͦ}T?F~~5,gI삘 |gpKb΅u7g8\, _+]IWRz.]ٻ8 WJ̌v $?xg<] aByBjFVEӘDuuEdVF_bKS(ҋe&M:z8;^8 vI#I]"vf$ò7Ml4|~!\L`!kj\PCca*h8Wo^߻/wvv߼Y'hb\ڥUkY.cݭEm&y-D[f~\&/{4m/LƀX;_5 oQ[f&EŽ*NBjU?Z 4|U8 H_U+1Σ;Iasg'eJ<[LY7B\$!PH%+B6E3C9pe|':0qyuc=1.G [+' h"{1Ѹ+m,kVh[ie]gNώץ@kcbݪ ܣ}Ww~(m*fH^N.K:73x8lWp`}\'<=O56~/m5W iD)D1@H@F$=Ft\,#(dh |2苠9 gL]]|R5 xyAb SL_hN Uuɍ^ѻ|/4E[\[xe'whu};CSZ}u{,P|^&f98*7Pm3| )?j^HI8mcovp{j 8%rѫ&2G=pQo=Pl-( !D *`R9(c\(eDG0h F*Af:v<_{9N$x.օw՜a3la4G2 #ALcjiqѺɷ >Kn `NLԊ\C䈙.B| )g~bp\#)gf B YNDpǴF`t*|YL0r7` h'#&s/$a[Q9 SΠ,E#s85unƙcNT܈4|\{.}|}/§4D7b. UzPtx*(*D LA0OSVu/ | :=ÂN֮sWEa0O]LX{wUTsW?v^4 `VzpΧ6x(Il ƒZn3Y_HI9kN0$Jo2Nzh}v#&MQvwS~T7}Gm%l([^X9 ܮ~=kljvX^vJ)C͔u7F.bZ{3-I\*Jc5f.5 F?F?~ٞ;<=9l#U~s 6 /6\kT.lG4c b$r9%s)tQvx #ϻhwu]OnФ}~uaNQʝ:tO./K+FT  }t(yX fŭweH 1Y( vQ/3KI;Za3GL#o8<%|CXZ_1w?wVqG Ή{ͅO6]AΕ`9rɦ_\~lc>W8Em6cvezRSk8>Xr胶*zE%UKVѦW['*z=^v*zٮo"3ă̓i0 y%r\Cx#s:[1?WkM)CA/'d;MH$; .*@Zł'-&j ړ)~-%B{&CZ (t++5u4.P>" xkdXnYP&_)_sNYSKbN9c>RZ>o!=V2q_3Pdpጼ uv!v$GRɹ[>(/5 <5FAl<Дhv3)l[ 1*2< X8Fk% 0 sR Rd m3͘|` =C竱>}P6ގfB4uΣ]W?G+'불fn (%L1Xz%WM3; D'`AcsgK15MԾu冤@ Zbg v#7L nO֎RPA QMکV;/j7:vQ2Q *UG5G 1GR(\CȐiW$!$a* [u|_B널I6<+M1̬\}tg:4s$AG>lp@8! 
ףKG~ &uyhP+rc.{, z &uUYA=:_ &NJ@ZSl|2E"A2ḱu\D ,˚g2dIgYiˊ"O ye 4!\a<+e2DRm ٖ:7px6,Ia/ , BА(BP'iDR1&$F)vheYXJP^,dCo2WlƨdTs2%F7cFDrR6I ą.B@f2bA&!;n l,.1ٹ"Eu$o4v'hytd\j4!cA 9AȐi s*Т0Ubgckdck~tot^J:dP|tS""' ꦴ[50m]wfߢ%tU#z{eqh1-BR֐j/X;XKnH\@Tq Y`%LΆdjy&c3>Zc(T0bi1:?jkI3t!c䐳gY!DH1pZ 0.Ę myti6m_L`&aMjAaUWNe@-ٜē#CHFbVBriGړP7(#sXo3]J *Cˏ& @B!hQf"3`7W4;u[\{yz'$&g-lt2 IνrI@gH}bg^ ޣQ1wVZ9B2&pOh}k?=ӛS5SOi6Keh+;cu/K&`?V/M&tȓT?q(7ce?˩'M'F?/g]dcEQTT~<i4$p& J{ᒉco\M7#msxJNh3b=l KN=w]bh"jT:2RtT|ہ)V/MB}SVƑ{zOuzt5&a9vnuxyvz03.+ɼT#χQJ\ZYqןvaP-BIJ_-hX̜^3?XvcE=XzIGg2{GbT:i"ZΌdX'NِW`t~kj\PCca*h8Wo^߻/wvv߼Y'hb\ڥUkY.cݭEm&y-D[f~\&/{kMƀX;_-f fM{twݏU|&3?U?Z 4|U8 H2Zw!,V8c\#GwăXIY*:Sv"!D.(̀[ mfބ/]/wwoz?ŋ!9t(+Iݢ W-ش-;˥3gȩDQR2 ?V..:xi3?GΩ &;dWDhc "DL*vQ:Mh~z+ĮUʉ>ow^FŠC/Ӫ!i" =X6OdD:bPp꼉<ћAL)Tʎ$]41tEssIwؤlQ6thKu8ybW]b O +܏;T kX 9Rh$2db.E(:{BZ{dk_\bRxyZ+l&ܣ|Ѥy͹]"𜐩 ._7>T/?'_8Mw #z{"Y `]r6 -XZ%xQwy4ub̿5w|w|9\BUnIUmڷDTOlsdz7pJчZf'L2Fhǁ*]:42 F9P* p*C؉ U4L 9 DՌ%[`B_ضϷÞл>UQ]mN7?ƨNqR>BCV;(QL0Z=vӳLMRZ,=z[yf*@,lxL2P{S5=@U/ ֺ"1_}"jB0}A~ü`O8x*oNWQ<Υ:u.X ( -qbv.G;^NWe L)*f[)}:'e)(hAHLpU-{M(/|q u c b+R,ľbEefl|#duwDlK"__χD'ِZG# {NKEttLE&/Y! $;\ 1ͲO \Pul%$2kld U 8v md9ScCm&JGӾ,+Rvx}Փ]zn'wzQ#7fgߜa[ Cwsiay.&]ry_>Tή9\j7E-hP T2$[t&{j91&A"ZHb6EF]ɹ(j>-Cr QeKd+[)a߉A526g#͸J3,lwB 倅ձO+3~sH}d>gg,Vz_4p4|?b##{ɔRhR*t`FLdʥZ9'v߆aVLҪ¦K EA $*"9:48#v󚋹+L;ں1j{+Xj%_# )rfF3; xW)|7Cff(1d*R>Y&T3\hHT֥s;N}1=ǻ㮈Dƽ>(@Młp\> 0)8+j [B,2W6l*mj:`ߘX'uvJřc@tYf61Ca3_DZ*7⮓W%\jT(v6#6Z[^D*f8 {$\,18Xn3LY`go&ےnD֠Tv%>,#H*V3]EAV6ɊLC ~wBȹ MIAkY3q ~YoPT>+9I۬ + r`D DM6Y1 {l "ȏhi_/z ~bN:d"+W&2DCdj' LєV3O+)wuY%JS?&搼‘7D1E $f-1t)Y$P)Ɣ<ڙ9۩4[AOgubM8MIyŧz"QPV C|BSJo茏}ɐ:cADvhtx6. ho_ )S`jS]~^(YR/VoA'TeE)I#iI;iRbt52"V,}y6ʢ#9P)> os0Q YjZEV+KM *9i}򏏼`Yռ\in[xHi][k4Z͓9Ǝ1ʟEkbk dԂ$ h"Ձ @WL13QsK4` ϡ惱v7 85_F;WrmDnJJDJ*/*KYo?]yS&\}.Wz^-64&߽zG~·/}L%~7axXsN~}{OZҮt 9E/%STb..s*'d:G@:e?GWKq۫`7yڸw .tqy_w'Y#wZт EtZ̥S",9)7}/Vίz^`q Nެvxo>cYEzG:E8/D 2NڐDz^b{sսU;Q݋ ӥdUf֫X  \/)`>8u>d$Pݫuu]*3u>Gflq9r޾v.QY- v'X]^?kOrq7ܺknvgtzz7;CnqUZƷw\ޢ絖7t2ٛc/ƼoUy.;i2ue7jۆ_*rcƈpk܏|smK`yfokoq3rԧzA;R'@M]*sfԿ:rq{YOm8FN>{~e"bQsq9rd0 Hk"**Q-slGRo}eT*E5IWxe8[LL#d:viBmbe,mXMX6>Wnf'0yھyGڦϥW?bi 1枅2:8!"`^Xc H>#P Puyג->}H*՗H@a,pP&T1 S"jrVJ1h A6 22*Ȑʰqje *(]We$^͟uo`h&[Ic\|͛Ky~1OX nqzfrn?w<"$]b,O%Dm$X)fC Vt09E[t!гzyiSb2'Y37`bYbയuOdtBV.i;pus$MF?=ܯ'XY_{UگCb ǣ{64L>ϳeϟdu 4=kRVҽOghhfRus~vn2`w8]O)Q~~b͑p8Go~*kv/ho x_xt_ ~xqXwm#_!e}T_!ɾ 0a0cKdgI%G([i ERfUuuLtG8{ο߄q5S.I, 9[>*w>'ZCL' K]߿낋\d691Eթ?QoҟiH)#96ɉ|f6.Y,wT8U8[s}q_Um^f%*7!&{{uS4\pdWHy1p=-gbN) 1|5I_Pς]~=^z8ǝm@YY_{Fgca>YJ}u!=K9! 2~+*>qm^t7j..A>佌~$V !ͤ/mMϥdnDZT1pU% -W~[5r#֮*NvsVd/eOﺗgF ܂C?OUe_ y/\LޤF~K~հy-pV˝bMpU%<[j\gk(jBh9YmjmMT jyb$Kk{#3^+.CC E;1My,x3%sxu);Ƙ[l.1IM$so*FfxMX?`MἲLǜѼӮ{ߚh97u7+7Akwq3A^P8#BX#+x4Kpk!#= ܄/p"^$ 1R5IdK]{h5بbUYnQA]pur TBk V4rxeux9XHdIn$]iGE@p]|XGɍVlG&7K4Z9ZpZa.(fH'ŝ $sQǢRw½G؟ eY/nVn?mklT61\Sp)Ay:%3!2n"gڵP[]uYXu$|3a~AVER9S-v} ]5B<ߔXT&1J,8(5$'j"ntHɑ-=Tݸբ(w@` u ]s'֣*cܻ+ρv}.mqm$ arC21)"<6W$6p/3-./${ASv} < 5i5P|#}ok|2s8]]er<u5{JMIuE|0JGLxfu8rx^u8jwmkRW~+uE;uT?`_jUV]! 
+Os !|-8͹!SbT/(&|G(YS=R%}{y+ L}{ӻ3+v/ƶԽqEA6غG?{7c?Wqz*U*k^)pɃ0EU^fbM,E0ٻK( S) ʛ*=b^KU$<) VUFpקj1jE~̷7o^s>ZF OL3eb9̩zʊ)/Ӈl;O5vJ^kk1b+zvuSջ+?V8vlʝe)9kdilHeo)xaSrȎt-xSS onÛkEfMr|r-Pq-dr9k!S+a] JM; t-0HQWH$ r*S SW/P]qf`L8uU{o]e*5t+?iƶ NF]erL2^h^ .cgUdSQOV)ofE0>^<{~y@/ 0_VuҗWJ z|1;^m1Ylc4WW*|tW\Y7ms8wh%ۿaj1iIXeUv:?R:YR*MI|"0Kbu/֭=5r#^g-XQp@fL᠘L.C1 2RYT3 ^Y׭Sh *U'ʚ.6Bɹ0%K%+upਲ +:@OxҷӚLOyhY]g&676оvɤHb&`L#ш$y5Mbh eIV}< 2F iY$r2D&LU9(,b_f%6ݛ_SQ1m0S%n.u܊"ӓV4_>bW-c'^"ExYHG9笭' D(@Z/Qp \VD8Eo:B!%Ɛ Di"Ak/|)e9"qTu^T2i`Jԝ !2"t4(\hh',яq^DhYImrDLI@4w.N%θs @zhԙ"O w?cǒd%dKÕŁ`HCE#=1'9xts1 RK3[JJP;p]%u=BbNmIds[Ϭ& w(pYh$98t!-&D#җu[_:qaA !6-֖О*Yqg_?yNO04x x炴*LVD42hԊ$#KR'oc} MWMhep,b.o;|LF ٛ73.-T,ּK\c%IIOljSԨF?AX > rd83 y} } G*"@`Q0UÈ Azb I2& m2ZĜSQ atW 퓅ዔSzp9s[)߮rtq/A o<$D]_.ЀtxS86ZZ%"hͭ/ATH4FHQљAiiE~G6M(tQtFo!ĥ3r y)0sT4I% 1AxHh %T^pk7_&dsZn<>6Dܟv5/[/f -6򡴷75kmㅺx:hd*@9+pkD'. 8(00SͨR[`oHN;kd4J,":Ś"uI{ST),^rB<-Kpk|YY_*(nz`GZh;-|Wt%r&I|(IO[3.&B@@dhŅd79j2v6D;ʟۈg' @# \ Ag%I%#*)XPH*'D-$OW9q`$Xk"e RѶ ɶ!nI;\_$<CB81-2|D+98UVKaS.Eo4. &H')#)>V:I|:Q9OBtG4 a('FIAcP6T'ҭcN3tCDVdX 6վfоtsΊ@Ujr> Ip  ^*ba PHdldloex\bmewΠL@Qh캟x*EbLJǢ+SZ0D0E4"lWPJz=*9V 0mL06h'a!M(u$ҜDP]J$pǁPH@BWӡETvW{u|,8'"1Fr$qcA)F)"1j)Cڻ:5lmwqI\ϔL)G)*垆6 *ãHEA, $k=q1]uº#j5A 7F$n:uй nzcT8獡^( hg7Q 0))7DpCR62 RI,LB0Jg3̩)&D˵J c @"&dbYb45ͧ_b/O= =82͕*hn{Wv10Ί~˞xS ~ G5I~r|Q9o'0.~^w5LT>sE{IM uAFsuv8@v|.v4$GJrN?qvW7)s$rVQ FbdRD鸜\!L qAz _g.k]CNjEI[+.q )UqbaNE}yvMC}ۛqv⥦\O~]\klm$jo"_^tqb j$!WbH׫aû3ˏ(!̫wAddz1{׽&gQyx$WjyؙsoHX s0܎Q"{-u@=VTM%zt{KdǿݏN]~7G-|PW7zrta+u7*mƩ׏6Ȇ6i/An{!/\=J\\A%)Xtk౹(w"L򩻼wH8z݄Tn|,&TN:h-GWѥR闣N^>}ficߧOϮN7P.C5{Ke{nY܋%^'xB;y4}?w 6ٯOwIsKqoTz.UeKUgja1i:Ӌ{'}}iQ3d8mpI``q/J_^*C9 j 1fbDb4v3ַa`_>?o{}HIe%w9WW^;a?RZ/_ 1b1wZ)_߮ڈ#˭2Gg0mZhMhpq3|!g|p; g6S2e%3'} 7=+L4KŚȾ~\IݚiNZR@! *>K$J*ї 1`9L8s)sq7g%r]kϽˏ.W"/~)Ww_R곡ꚞmw-j13^}JƸȳۉ^1ŲDOdʧǫUq%p>֘r c^ǩN$0B"8ןwp) z@˞J*T5,I#V)xbL pmg^kUy,N KBp䘒bx5p@liQƧ@Ɇu8;_xeǂӫӫ"s Wb7M0ĸH%2ĥ6lcF1qb,cW4$G$WoF!WLj$$1#pln,z>Ӹȸ'{ҸCϨ#wdgqr)@uªشtk[yŵj?CL|hl\.0^.urNX'4SGg@s1 R2=80멹/Qilpklj^:ўq8;Ua/}!̾E}Dm:2Hˋ/_U҇;yONήN_1=6֗Z(gr-RD'ѪG):*%S1,߁VA,?EEA!ya].19`Z !"a՗(qfvD9p5}ВIQ*=ƒ(r'(Hm@oZ{X̻YkWqޥ=.ZO_FoWC#ILL 2MƤ$1H6fBag#rp#4Qc#֊I~جV|ͥljQ&X5s1{zJM0xfivˣWYVGE5 ݟ/~Q1onnԻ'{~kw^ly=O?ڛꝓy{>ֽy"+;7/$6L?m-I/#ۭ~ o&l̗ۓ_?itqyv+(8 ԕǵ޴ i}08]xk̪nwx3;+qMrf]vaf7GUGꡭb+{~nTӳvquM;ݲSq!5dOlxq[Qgłͺsphܪ4Wb$I o# ѡ^eY; jj:Ҿ{Ҟ׏JSՏXǧq-.`[l@ ErKr#$;[]Gd]֏b*2xIJ(jLQTrti%x5ffmfŘ 7.eZI0c6 KZqEA{}ɒjd( $)OTA݆o䨟h1SmEh!Cƨ-"P`N$S:֘ZF)`/5\[WfBT 3DF݈#?˯'1׻M6L3읶ZjyR/)))P$O2R!~sЭ*[bsVJz9mӜi!k)Z?Cm-Y7( NXFs6ӺNV'}mZշZ6q[?RXRЯS ꨭ)I%HƤsrPB`C* )QX'΋MȐњfB5:bsT$QˍѩL(FaB퀡= Zߢ:;XA)xRuv) ۢ"XIˤWPVhL1),Rҹ(3t[*BQ0NY1 M?j킫{b UV$S-^Q)ޕ2V'mqAnDUmf6 C* kbu`q`Vi>;B+*ZPL.HZ-p1m3pv> -/5#@p1!l W$X+BiΏ ,Ѵ8{ ":t`2Mۀ8ڂv NdAj}V''4+JXԶ ))e%)( dhƒE6 /O‚.&jH ^{@# vm^_b8AbC8*f7K5ʾEh!Kr ю2wP 8@tҫT%р is^Ouиchdr[҈y!@).63)H8KmF|e@ARQbAD1\ɂp0",0 E* A0ևg[.;mtҕriЀ,Fym*gUn2+EXYG opf&= ykƯI'y*C0 \n B`⸰Askի%NyW>1j\k5-2 b-骅 $8z`2o6:7۵(8w`)jY$J]!58EMypY5Xp4v@Ƹ@4 | vQ",TPs1652&pBK6\#rRKhLT`=@T*|=j C{=kL0ˠ~ aQp=+NF"5R&Xndj'1֤^r`qQ3byPdҧD2)I12WH?? 
E|5o:k Pٗ9kHPX?̌pC?*=g2\O#8%0h(ɌS:`"q˖sgI%d~ lMCe~\XfR 9$U~&" RLY"D` ЅKNp9X D؄p^Q銄 S D8Øa`j?^-E]rXO7Z*ޅRI%\m9 L]i'ߗAu)Yrk8Sk?a^ҐufhvK,[SGl,:.Wi;.qt|^ UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%UQ%2ȜWxc7`f Nk@BL( Ca>8L~0(#t@hja~y:Z˃5~>$U$O?H6MG .brZ&S E &9pdsE m^|wJ4Tqddp x~9nhW`CC!:s~z]F/4=|db 4uz_d8&x36Z[haYsmS;nlM)v1h桻4m w4#|DHi>"G4#|DHi>"G4#|DHi>"G4#|DHi>"G4#|DHi>"G4#|DHi>l>ⰏѝE ҽ?]3P!"؁) E3af"=3Ú{مe{pcƔATS2"g#G%a11쀺pz}ꬎz;_(Vޘ7{x!A0+&x=Dhje1U5% Y>̘QIM;lNJ-ϱNJq uxPS mtM R[\.{oZɷ]8%բ{aeXki8Uڅ(M|T>uL}_͌6־ڮo&ygmija~y:Z`4MiٞxORO;8܎+FKʟm t{,:Mݛi7ݦ]RmmBe/zN\su)jFT2} .OǷ#ɑkgFރY~eoy#> ԈQ#>jG5F|ԈQ#>jG5F|ԈQ#>jG5F|ԈQ#>jG5F|ԈQ#>jG5F|ԈQ#>jGm153 430q4 ڶ=fr53|^ 7J J J J J J J J J J J J J J J J J J J J J J JS ejz+^~uRMn% _~N:~"1.!—x qGSi?%H=.=)GWM#lZ},pդ}ۯ+H)"nJ֋H!+[ v{ry*wśO['+]0:cD&ggbv\2;ápR2v7~u{m|rK`~0`Zӂ`{$;"j{q4pq9WMZupդ4W$ܛ#+,&Tpq8vդI3+Ō 'İ \5i8t`W4Z9%îOa?i?tV(gW iu*hઉk&l!W,7V#+M\!&WN0kUG'k:\5)'zpR옞 69jj<\yc%:v%vS=\#cV=V~Rۻ{$z7f 1cD_VGnjr~_(} {;_ /%d]ۯ,|-0P7ݛ<o<^mK%Vt5@l0Uwyq~VJ>uo.f^ s<+ No9 Jؽo͛0s]xo̻ry°XYlo2 .sW1/fsJ76t/>[S]EtJbTAJ|M[ &M~\e]jɻwmm$IW>vIyE.iljqLj=Od"J]%J,eV2TeQQaG.$Qw>\βZe+煮$zY@)dEéY?Oz- fV !|%j٘1| /M-`3W뮤 ZTmO- 9P\6Zt==񇡔3+!]'|\쌺"tUP* JJ!]oQ]\u Z`4}0xtNT]:CW]+UmR`OWgHWZ1kY쎺*pug誠Wi9ҕQ)!"):.p9v SvJcz:CʠR*DwWv&wUv*(Wl+\.V;juU֧ Jۡ+K_)k+]P1sl% WneN_ov` 0O*B)xhZ@Ӳc4J:DWX *p%t .{9]+otu>tō`\t or\`+tU h;]+zOWCWCt%et.h"[ҕ$oR]+lN0Jw Z+NWru 䞮·Uu[ԛ6EWL0~%9_NWJ[.;DWUWrmcOWgHW.0vG]\ +tUО02xteQ %uEMU{Ck[OWru| Za;DW;w&,hesWD]=^ a ܠdEA]p)nC{Ž++{t]|ڸwsb$~.q(kُr9ht?ȭޠ4Up|gժLFRXBXl?9M͢wݻwUM'XH:^[1OtE+TyNc`J/ߗ{A{>R1 25GݪD492  K~pgn:xd3~8?o4TBš _ČZ>~gˏ_iy>Vxu;|Oe1. mv纯joc?mSU229@(mv\,qT)Dy4KrioX+2 sis<8 ivHULWpW=KV:q\kK43fnfcptr2,F4Dw(*)=1be\f_32u+Q&9E/5^ bd4-G~ Wooi`/3ܫ|,"> f Z+?uݭ:ךnhf'd1j{}z|a}ϓiXq0ޣOuWYxGwӺOko BR>s4]^?%!`uE/YtfsWn|5wZVԲ[n_xwl=nhKXchI6'9|G(l9v_uɧ2/n1Y-n=Ł&)P5z3!ʆ *jEQ\5fnosRӏ?-K +.KfVgݖ 3=8Ύ׍=o=|y2%qiuF]x;atY|1 t)^_Jf4pQ_"-O* )~\w.($,,?ut3MH+PXot1VkX[]Y#GX6VZݗ8n(6c%# *ORREeУ\zp-L_0-{ôH,kQ3uMm(}="^RvbN輐6X1ه MNTR c/lNx5FJH!jWևɞI!(Lc)'p{tٔ٣Љ855K69<4~ZO  b AhN%ȕWJ^MhI_r-4Cj=_Lˎ Ni"gh)<lȑ9ˆ3hx$E$1-Eck﹍yb]YmYe.R\s@;'UtdAhP:c&DŽqo!N.Fd ,,9rA&t<++ͶTj`E~=?[f_?ft[^yPT0,KP%#rPh4wEld !{fٻ@.VN{ I3@ɳuDGQuZA"97g$y/Za:>NZ:o_#,{"rvG2`&Y* Lr4I2:44cqL- g5pߋ$n׫I轑:xNH *hESND@Hf"ZL轭a'硬֞w2c4|pX!${+tiED$$`)r&[*+{h=bUo37f!zm;9&A'eb nTubDj{%J-$Jʢ /69QRȠbŠ'c7P)e0`z? TT2W謭dUA[y֚K3V‘g9ыQ^|eͯ+e,Xe EȽ&&)h̪K\*aSp.W޹"!0Edᓑ~ 1jfAluD偐rG^)8sBjQR&Жu0w 1֪iÒK|Jq17V)Q|evE{ut"ud8ZjM^,٭I/I+EņWP%}Ųŧp}-X7msh|+(r (|VctѦӓ.+,~>ܤh ƚ}(y :ŧt{1vezDS0xm5w76 llh՟`nvF>bd'G 7]t`ѲNx4luejiP+I@R+9u*%Չ$E $=1_lc3^G ֐ꮬjzի*k ㎉ɶWZQZIC~`5+~ZD7*(97:M8p ەGZO)FKX0;Y B$UP!0(c3 Z d\Oi8 &"#]נx X9JdBpx@1ek/):by\r/R *˔X" 5w'gE.2.3l18{m6.T4񹞔ڴ?eÁ߰nhE"2ڦ]Y>sO9gr=GrȘoDe$m3sʡL+W蒍 Xv(p3«0B-X90bޣwɗ K>$&/Ø@RsNQb#!FiRfby&-GhSp'`<(='-;/&Q`=w6^ ]ː>LHɇ=^h/ݭjj~-uzn67N<̇.0|dц^h)AjV`FNVv0*yWd ӛSF1OAY aPV,#Yq+!{d[V5F%rc3O#S b$%BW'f{= nȰ3')gٌzƴ.@׊L778SigM"v:H5 n԰@6 Fg‡4}4?)YD*mIȗ=Nwd. &{;ѯ.HV2bUElIefc-vU]O)jԞ0)mӁF$X 9FM(TpVN]=,;WddBI΋CԁEN>JB5D~fR⋕GO/}dG YբKDr}p%Iu=_x-t.ԫ un;gR+sNl,l|39+xsid#7U[;آӱwc?4-.FmFD8n.oMޫH/)gͩL6bt^r'ggxޞTÕ_M+e(ou-fjŸKl$ڦٚT|\U$ܒQ{P)ܻ ɥ-~N=-jԾH*W)efؼrsU쪞:.7/?/qR)Mk:^3F1W{Bu`'F|YN&k}%ߕnFa{΃ C9I hzʏv}yS*fV ȿ%YokvvdrwRQ&yy雡K]04o9.n˔oХ;ޱ ̼Avˌ6Ƴ[fwwNSc VcEj*kiub&My+aVOS0^l訅M%=w \Y.Cf(M3rʙFG/6/夗%-HVtA'҂62$Am$h>Yv:zEkRA#uYH&p,QG,J%^h.qQ٫!͍h.iۊC\6s_Jfaʶw﷕oy`vZZ\{7ƕSNbuZ!233D:Ku =Dz|;e^ũWV͸eO]3c ^ t2.?}.*TZU),0(D\#R??%{45*@WH d ]ws7dhqCg{ge庲_ }dGݢ.qR AnH"ㆃb{x{fyڒ|ӹ3h`.o53Ġʡ A)ҝC&/֝(kt W(J%"XXij}u_Im)N|Jj+Y֣ήnN[%  E18T3%c*S<5 C&kF V&-; g0Η N/>r|o! 
Fx5_$xK<99Vg2jmpF9X\)Fy[]T}Mz&][+/yKI|4h <ϞYiM(d,;@5AK+<Ka+uuS8y1jchi!:p"y>˾ G.tE.i%i۩cĥsi=z_1$I$AUI !=dQ0ĜфĹ1n!m"e-m٪T;AS%sV/4ϻxƬ?UеMGFRhJD~4mdMŞf6L=.jv],{>q,\p[C</Q&墢!1&  =5d < ON6URjV3J;??n߅v{W˧#}xocb:vW,p l"7$hN! 90jZkm!aT&gRZDh[,$K3Ze[TX;v ^g}z&/Gy¡ᡟO/&0ȝ ïhpgi|?I+nm& d*%zء.e=%q+llϞ7 Tx /o#Eme* m.cmS< /yKʁ>\.jXنj_Cշ_VE}̯'Ѹ}P:9Odzc_O{EB]OX_yWl,.&eix=JmQE+wm)s$ĠKOU"*`"^;x7gqGrD*+ QFd`MiBIp\Acc- YDI*=xU% 'FYaHFbP7)uRqF5EU1$.,e[lbWf(XX硶 kd6$ڭc00 X 6ɀ@ު@rk+AcY@#@F0F)5"I<>bW88-Azc=Rշms'jVn ' m;ζgo]1fұJVAlc[B"a >$2фZG".w&;.I0P&vXXӁqoO&G/FTxY' .8W3"Zх`Zxrɀ(hQ2kuAcƣ8u@>ZY25 pS(\\Nw 60` 0RBId=LG?}J<R+"Ҁ{`DShh棓Pf|Sr$(4%)؏Cnav3o90+~ 5gKB xn Y%'@?q935rPs3w=;Swr7;G#xOlεE8G'$: (''JIBp2UN0Ϣ8>SĆ [*BC(leHQ‘}~jWY$PVE gyS+첽[{hB/4~ZqKH^RXuQ75X~ϼN7^_un CQBShNFGr+jUvܓ8~)YדTk{ruOa뻹׻:Yޣ"8T6 G0b^'㓋vg}NGn^k{z]wK%HXq2گTH GI{͎UUl9FtGg8y7W|޼?o_qa['C!cpz6Lv5v͍CZwỳ/0-wK/J8?(~j~wmqG~rj1P+(6io4+|sJK@хT4T&Ġ#ƛ>6\1Ҳ7q/v}'%.4imU~>P 5qP@*jc1J  8++ǠCKaym &sDW]k[ "a)gNHId2BN'uf|~֗W͉U;F2M/rxݪ!uiRw4fghfժ9(tBB/< Q9=2Eae"(tP&&9KBi$IeV|q5aډulcvk!tdzDΚ1%>Vj zcmv{:_, c;ltEQx ᮣ5{̺ s/ffXBC5JLKe3RݭlN%0垹CozN.omKo֧`/%7k9MD&ņ0>v[qJz}/ul%) ѕJlEWsוRZ^u@]lBCR>w㈺R\̡&Jܱu1Z5mC0Jq)+u4w])%UW Un_W C3RLt1]WJ7ߺj9tJq}3SJg?ծi ~; ,pbH>쌱Sjx}HW(#KWaªG=:*[}`%CUy{EwZݩq2o_ 7P8[[GcOYO;ak9Cn̈́bCV4pQhZi]JݪjںkHWft])s߿}2UW ԕnboSCR Xq%+Zk+t~u^9=UWp53g]WJêsNҕg]c3R\nf0]Wj ,I!]o q3RtiwDRdu 64+G׎73wD3nu|x 8 *TRaliy,]CAŵ g?TJ.dvtX~uWz'q/,t5'jG)38'U\uf6F߰{F[lG$;R~svsoy.~[tLԏ-)w{.y=I/OJXϛ&|~qu.o>p`枷חΆ'+ݲxt}U%mDҶ%=U#Rg)dO-9CHL#sqͫ/o޽O֦Fv̳~znM~/b'UYSqO]yWcDO}סVegVm;$/Ek!?q a;,o%KHj'[:糴{~X](:-g (2lC-s||BzmhfA'uԏH%̂Rڰ,,pfGiHWDft+ɘjrˏu])Tˏ j8Ɇtn3+n;vA̪%'t- ftѷ+F3{])UW UH7+3ҕ:jEWJee0vuDHõҊ@+1]WQȘ5Z-09ӌC+Rh+Lu%o)p;wѦGWz񫮾]}'qR{ >cNm:%]?t4np<;.Lhe[ (]Yh:tZ5آ&$ŹftSR:vc(!Yu@]92l}CRѕoEWSq9UW [ *!]c;ѕkEWvA~um238[x0vw;hsguj iHW , KԌv-yQjΓtUym+RZ;+UWKUrB+[Jq-+e;w])+aggp(`ۙjW̺+@fvt%,z91~jL(H/dG&jGgЕztч{S3[ou5xrFn#9&emjq xŸ8i3cv)p.4xtǨBUm-䆎Uʯ<ZZ57ֵ4 M-QlejAiړweujaS  ,ѕF׊o=]W{ͪ)6+N iU|hZ7rnV]8X3EǣiZf3+sוRF^u@]e[f'PN}8ZOsוRj A "5+ŕftZוRoni]c;w7VtɎGIyg!]8،75]VLUW ԕ5+ߌt]WJuaE {'鑮]!ՆS2!H_9Dטd'2xp#qôAH4 ё2lnkWK?)1oHW ]);+ف^FWVG!]Y) 7+ŝx[+~u`Ӹqη])g*MוRʪ%ꊅmtźmFW vMFRQUW/+/1RKRѕmEWMvRuj<{zT]UCsWܕ:*皚JkFWb+R/dPJYuD]VZZ :73ծqѕRUWߎEO'ƈmqaⳅGNHZfqf++Zu袏e,7dN|n17Y6?FFtYbZ49i4ʹ7ZuW!dqgap|q8کQ#teW]=#.{UN"0R2|&64F[ѴRjEJ5,>)z4{T\jۛ3M1LtِðuO?mNw?#g-*^"iUkw֬ݏ{W_/Cנ]A^+']CY7%cwHe>^]_\]vwlonv>>K)/p }_*ϚݟRcϛw.h)WwܤfEUPRߴ滷 ;~٩ŇUTT. 1{Gdˁv8ϊ!<ț/g6{ʯG=ěH#=_}C;TAROzl9 YJ!=Q脃3]BzʎRJQ?q}d;[>?B~rY|7E2PP7sC!ᚆ.${gKL.p:Ox%`QJvcJՙ"]$d!9m1w]}qvpє~)ʎҡ&o|9W5(B_ : aeKJ,gQ#͙! -K"ccZs6%Z JuGa1WHF8 .ΆD۷RK}_: \+ɻW\D&Q{.uk%x4f`m fĘ o\a(":&75w9K ' |F*F4>o&wj9d*IgPD`.Z-fK]7`$!ѪeRzb{&AGJ$J@-b)z,~M)d{ҡ!kP;XRC:>TcI=\C{ i) l}ϙTPfp$7uSO@Z{K^, N\T9eqt-:-eЛ|V06?& ]K)#KF ;9_CK$1¢=Q w~w}sy &TX ypHO-7gcvȄ\=&Qs ]E{JT.0ɚ_vxMZ;<a;-E%UA B;M9\뭫z8H޲Z "Q塮ЅYK 坣~`%jm ICh6Ј Yh ճ#ʺDCA~vPeWS` C^:k|Dt:2tэІ hۣO>wɵ kA\!۾? 
)JDŽOX_qբXzb=JPCaBEW +䘰|a]IꫡLiZ|w2I|C+gAXr drǍ]Ha8Oh;BG7#TCqW}Wx+"c S0 ,[ݎ1``HDŽuM, ADC?R?֮Czm Gm`&"Gp 10ʱE;8P1yN\B% @4a=T@yl@dptR 7c1(AhŨU!QsMlt_֑~W*z:]]q&Tw>jԅQTW7tZ^WS'F?5!qC(g u6@ 1ۣA7lŦcԊIC5:T|883aKbzi1X99|:bPTR5Cs"@N#H[~vg3@]$I14)%AKRiۼ6 [f{ϽNWEv> Mg@WehZKZ_k׮x=<[ݗU&}mZ$ >:%$́ꐅvFd J)hGJW}HV*Ui`!< !&`Ge %t]8gPRiD.(2ӪAU^k2`(`hx L}辬R ٭.)n q 4E:Ր#+PZhFy̶eTXn*"1~_~w/)~߬Kwq4ЧBU+ڄiDPc ֣]Jr Gzq:EjC*KtPK|̡P}ֽEڭ ETev PJ4J،D*ZD;뒄ܱ, IWBkF -D{М&}N0Zl,j PФ ȚL\"(AILMFB;+JLHP?҃RUq@ơ"2ΪU%Ca!dً$P>flvBU9QbզXdk>؀`%͐gѝr$yhP!Pӄ_ AiF쓊UADE#ж+9ꡂ<܈VCvnQD{}PD+Y.TzʠUbF =>Ad=~`-F=*-+P|]OD4cARvr,BA'y/aHQ 2=Pʢ#6:XLzށ됪,+`~xڈJ5(Z5rNmSȍwZxnfwFO -jF=dIQL=3i^&LPha3LBvZuZx9^h5у7[A[_%pg^y0E D+PxPU8P(-lZ }BL ӌčprg*AkO^(TH,iJ ]\1FnukE!nކ`Am6ՔWެKC$p0r( :fA$mQp* i7u҇ Z5uhX;S`]pB Z 6Kh!~~7R0KiR.US:PLʮzZÐJ{EqN ZAiѭv/=T=jz2~ų%F)zh^rhHVthݤ׻d/ݤr|f 0 tۅtkGK\ryI ㈤ﴬp~Z]i9]l_]LʡW+xjJ/m-N޶*M'h㪭h%ԣ :e~~J CR:)p@@+R V _5J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%З );$%8%U(v{:+NQ Ѳ@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X tJ 03 %v8%}+OJ- +NQ dV@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+NW `$fHJ r8J ̡(T@qJ T@2J J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%(.{X^8~hwi.Ge~9[>Bb,bƐKe#\\#\Z^QpKzm;CWuPtute& X)32zڎT[Kqg u/GhƔ;fǣoVV[8YlV&&У3o]̗S+zs|vߑ#ҍ|nkmsLW +s-Ҭ+;Ӓ^szIG h>ԯݮj^q1_̧l;P.lL_cL lXN즱Zy-`u^jA71|?]^'o.3U٣1Әnk!]y?PL9_'m67r.37Co4}g]9v"Qٷ( [Vd\N^oM[^jκ_>Nܚ%f;;1HzCdߥ]gb6]L z^lqùLX%gLSѢv[ӕT-{p5zk"O2:w&E}GN}-eV~@S ؅L- 2FcZ J N-(s+ tC+BW<ULW'HW:J-pGOWrk!'Е 6!) nC+ze8z"\ $]+`,Z%~PFЕ"0 rQ #CWתP^wtQ*pT+b0 UPtuth[1 "* nC+P*tutcnHXI9"j0NW2]}9te>p I▸$]QXDU,oJ"` 8S(`?Ȟ=h0M+a~+E[uj(tEh9v"3] ]iv+z7"+B ҕҩ!8 1";]J#N>LPtEp ]xPF)ҕH? "O3Z+>"S+C* pP+CW@kT]"]e+` pBB±4**`?]ET;{>LW_ ]z\hi`/tazb~hIBđIte>z~1_ #)Z|C$wC>ǿ_z}s=XnǍcKI(SeP*(S}EZ eNsh9]ΰv>VЋgfgųՌ-/_&Wo.~3CJF7ˮz0UULJhs¢K*y<ڕ.JUKރų;ev?$Ҙf$=㔃"n0Z6cJ)$jଖ4mփ=&\WZx] .Veyu} 2n?^/Cx+ Vi{7yvc:ljV·3\0M'ou) \FT/\G}1ύiu|v9~uѻvs;.',|w=% [D¸9$rb7GXؚk~+>'~on9ߍc?bmۿ*ۦ]L,ޚE]_-iUNV:c v2 {l^λ+\t8:WIY7V(uuۛkrmGmB՚j)&CR/`Zpi\5ҖYEM@UPڶX͡Ś1>זDsw^}r6CBw{`?,.rȯ7f|nsc|Ӛw}}x>o\Rדtqv]SW_{Onn6ӳb1j1@U8C-ZA8{y[ia |,RYCR*ݫAfO/vB6zl9q-eS IѧL5E7gF$@1c&&' )BXT9Բ! hIri6в{!n^6R/J қ^RUVz|f2VﵲpNM ~Қ LE053Wç |y~Ww/C޾>67xQ]bR 'K.bUP3F'j5GoZJo[f+hag~|UzvR3ϗ<}.6KdlMK= _@rfrgɟ75 [H1JDhK)<6)okM)0J'{UD/4Ӕ!4D[`܎6l7nL76Ӌkog?!7uPN{MХ-NR&V;&wdjvwh6,stIc cIr~n';&nu?{/ 1-E&(H0z`nw 4ev]*+>цꥩgnrjm݋=;I38H,RT"O@]DZ4Z*#ô4/N,:}{A-m(*!o]a0g/ZrV㙺Ozu`xA|JktT&jK"[͂~x1IrX!Sz(ā.^3گ?ԭonm־t=C(uV-ۅ'=Sd-߯Fc'+˗B*ZGc,Lr]7Jx?7y53xAy͊݊x`l!s^Q2Ƅ@Q$l b 2)igJcrZQ2aA DXpKbƀEIDb- ztVE# J9R- Hkz*uDMfQEagH@.@@)8ڴjQ1W(rneZGEdQ0dbrX+? F"6Z#~hS tFJ+|ɗl>4_Kg"ؙ>dPJƌ*Pq^idqbLX=Oļ3FVsѥua.=hƎ]z-[.Ra6*m(!q6#!QpZs< xu <4/f '9ٴW/OمM9"Z >Ϥb45M ;d#R3+fVT1)%k a1VYE ,4*hhQ9b0XJ,NPB2-Kpk/[o@'46-}$tf(A\!gBL !Ȳ> 2Meeb&x) x[pIy[(/lxQE| \ycwSчK?7>`N/#-fHrozR;/8*Ɋ7$LA.JD:<d)Y/Sd{CBY.eMh)u#&uu,ܶ1[MbZ~{ .jXy~.O,qίU8`wqy&âR0@ꗷYg,Km/9?ˀl 0)6A# 4r<RN96Mld-< ; ̕_$0'OUf2d+0 I!5JPCz|EG⮫a~D?_yBmQg܇< x>EbgXHNSDJtܩ;3E> WÀP& -,I9b r-LH]ëEa8Mc,/f#L6 7CƦT$,̾UWj|RZ3xgSə۫ 9*|嗘?gV` ˥)Urﳵw[gU^?K7o՝7g^}ժqv %\OazR ٴ<U ;× h#]7 CYaVQ`,|4'XٿOw&{K6iإ)=!e$ ́}> >g8U zT S<Ϫ޼N_oswWݛa 48XΚH$x^Њ𶆆CSZ6ߺD nbm-ϳO?Oړ-w=frTOFWRC#@6u"/2WwQRMxA Q`Cf4-~vRU#[I万s'E 8JiDF91 +8Hq $q!mH3 gv8s'~D<|G0ե$8C^"Zauw2.$*̒G/봲Sl|* vm,_zónVu;#h^l#6E8Fbe9,EwmI_!6z~w〳}yMr?]`Sa, Iq߯j%Ek$9׏x(B)#>B=TR֊$ *:)[3e+tfa)(QzMYLkgt̃/i\\3+^Qx!q]䐝)<օ)D+o,'#K:18w/)rH+s}|i?N0m1Kս8 U Kպ1^}*rﳋS||4_\=*]q\̤JRݕUEyre$Sz}y_bt0hzVWEiϋ) Pz3Ihnc!! YJ3 Gy2Cp\l$K3`la-o6 [#b"rii \99K$ YHeØ1 .K0>5:+f ܠ. 
&}X7s$=L !rLA(tr6H (e]Kca.@#F,E2M :/!2Z$MMq,s[ڻ}~^GZZ ZejEKࠁA gw30;;)h,Ř1}(xc (S.^nɲ!2Ym0 긑!W'pr,Ҕb.`!AJ8LT %p8-ݾ]-˶`&ʀ!2rfX3|b4ʛ9y)E k0 ]7لq\#;]L4(eԍ8g3m_zSEJ%,>Z ڵ\VKF.iޮWr y0T((j$8* qn[7Y/^>3/bLs2L4"ACaƁPI!v2:%L00sxf78iʌCd{+wݡU{3'qNbOV;@Jmb!`bPPS (uC$`DQ VLJhUZ1TSs7gZ{㺑_i@>`&;Le xyIc[i'~N]=n{g#HlESȪ"!ڰK.3pLr 4sh+/k_EUm TIì񒹤d:hڍu##wLČFl%} &rd)Ot׈mXq^M?i{TD4CRDcE8v,!Newؒ9: 0虓hNVGN-} :r(7yPnqPL aJ`V8Ob2xҍVՅt&dI U\՜-!Sboʙ'jk.@02:uW(OprNa- *gA`؊ ܝE4uɴO9+m˾@"q0:UKPn(jgXSvՈbYo(g 0!f<y l$Be]>=*^qT[)'?R1iz|x;b'ab3J9\ȔoWnaPA,(JI2̮[gM}>ֿdS`K3@,0mq':xmb^FD͒hd\FO \*f2=1z}˄mf31)W8A,ד5ҠTofb  %cdPQ)t tniP6$W!M ltHUnaPMH}!|(,&[&u#LYҏxf(Ep2yXo5Q4@XvZ*%,s3Y"-3 >M:z3xWEMt-flY=2~+z.{(+\c YrXwAww-)q5cbE )Za;jLx%$!'( jf@T IP4+Gq^JqJg [VFAf>,wjc[R48n Vk\:mo)iڻ^{#(Wt葄VRjoz[v n4[\̕v+f$0&IǕe'^rZq1)&yzcgiXaFG% .FIҠ:Kg#LS{/f2͡et:!٘QJ+4= HaU0aj#}.+t^A}1‘}L|8 WlB Lŵq{(9ܡ~mHOxLrNy;n ywYS덖- <nL:e,G=V_ (ck [? fMi4FJYT9DDl8Xn*E*[T[z{e $dj&p{5~?nDv$ 9tfpa˺CeY[K5f}._Y/UzDtazn rpi$}dI%)b:Aj͹Z y^8[SuAՄt UwSÿ#=C?W fwu/ixsnzu~h"Ż|ޜZ4pūh_mX{vM^h[Ui~j G^, =׊wn^tٽS*{ji>>扃bȕOgԁ+II:+|7ٿ뛋X9`-dǵtl@~5OLoL!sfԫş650omRPkzh{~_m49v42C=憬(ɉiuԆt`iH5tUBN0K%72 oe|Ś񄠇S#"hL>_6+Lxf0| b?\@Ϲt~R(vH`r3im;|'(Ux$D*p/6|5ۓVv&XbǸbx=q#՞dU~RoiPe!Y *^s Q,4c6 )Ga&QORGϕF{gYFrF%s2ڀ!IF[Zy(pCUmT-eMҘ|n`tϜ3܀*FUGoqK U`Xa6:s]-y Y "C654 5BXR~{wB 8>!m_cL ;׵հ Ri J/l9s V ^p+-5ri/$1d hDwVT Ԫ\i홧>ipPN%{Is7~;1شCr[9ûˎg-pM[jz7׻t2.zh*}aWŘ)%TGU)lss}N*N^a.e ʘ]F1unˡ7>Ef }.B5lw\/sjǏm2Y79|уwMs`F?'^OI)܄]`-yx;/5!VAcI(9#xRo~;=acGzׇwW1#!IhmV3]Tkz⥮sv|ԷxuoULa/׻7 2hS9XUSd#Z*uXLSC 5-Ú:DetN.2B'*lYL>!ts-ۋOKf d/#$3t=4dcu#}$SZ6I'!=6s9o{KR0V`tal8t4c,Y+3\k? 9ad٬yQ. 5>+ /ׯ_C6Bpj}]Tw)ۓm~1C' ٭.hzarmaԛ ķ *ܬ⻔ns{um}8}r۔/RĂ7şSsæOibS s=;?{m"D| Sټ_1SVʱy|3c#L33-0m];{o[ WO3ͮaQ5v>j}:3MFٱ=3ќ?ձѤW17O%Gw&?ORly)ԲZ qs 21,Zc78?*nGLr?n7ga&Z_iuB̵\mΡO¨ԜgꜳjEX*W;^tDu8#f L# v~[(s.vWjsU~:~ƽq[[ Wĕ/0|%>\̟VSݖ<,rFjC^P/ا_ژ9C|0M݈Tq5fbЇ5!g,]+V|ށטf_E3ۦVyn]o¶<%.3.Ag#w5?KS1Ὸ1Y+f{ ONp}-K,ڥH/G5 ^~ I1%8" fZq&Kp\ZNMM_"'F*۴%I{^~v?\-oAR\>Y 츺:xbjWVg)jwje0tս٫?1ʽ{jGx׿Z<|]׉r|fUWfN+r9J#vIfӐV |=.{\Y-u3!TЙ\PO-ɺV25N59{E;7g#vE !-)e݉儳XeNFf=4-Slz M m7ã}fӘ̵Ml?O5̟5{il6՗nz_K1Օ~y\WO)TV7BLYGtjϠsn~y=w˔v8J3E$W'xv{m@/9-E:/0`#v9-(:?őjI#ijFdY_VG & kR6Yok_yTV*K1ܾ**Nw#OIhXyr8O&Vg;UH/ 3fVH6.&RQ IgaF" ʥz {;{ؗevt:?)#tǷRF]%ƻ|l[H0;O8)etv!0>N|NFY`}?/>&Rݾ$}]p4Čy='l=ov| 4!Ņi;֘QqqohnYeŹ!- 9xi$TX4Qyͤw9ȠegnN߳E31 ȾH(}{"8BP7= X>*SG PsП~c,iŮ< MvkFІ)K?oVůyv^h]$rN' CcΉnԵiPPh3vGv9cAPi dQBՠ~ڦ5W \i[}y]q:T"jyɪi4sbGjq{f`b cQ„1'77ަ 7zI+Ҁ560t9j_MC!@veo%L )H S&djz˞BjiA=JU ּa|yAr9d;q03v{,CRQbɭG 4$"S `DmrV(Z:[m(hZ=ߑSUz4hk67},əhBADŽ27R3U9uɁ9Ls\ ԪuE7 Z[!- -oeOMifJG2/th_}cc0B5X~D$z{eg+4ViwF@h? 
)ȀV]ś0nLX44JiȢu:[pŠɬ{D&{ey4jNU7^<ҽ{ PSzvYW/>Bg|aQ.N⪂덛֩dni*2d)@AlqA\6fu0oY}njx{r V&ԑSj|nta9u N_V!j;ʊwd>|dþS2>{ђfW #F~7u^zt64cՓs5xnͿoooIvveif 0χ&&s;WMخk@=M@$찧OH6>QP A%JN`yc qF%%{>6M.7]0~xibvZyKۧ; J5sEGA}z00EdEԿ>UK63w~|eLۿ/Ȣ"@%YV"3t wקː>+R?M/~P.^W'IzuҮzu7 '(,#S1)`Ρg2m.1SX,Zǵ?Go<b2ȕ#wj탎?zQ|EnhEZ?ZX}H\}X,!sJ~,P= ЮBf(q\l0>a0Gh+V9U;bހuq^Bq٪O5)*h :B`{Dͻ@5g숾p-AMl(ԺAB?LTh%(K^Bq1sK?[^$Vl1G$oB 7G} dbFrki|Q6Ay?%탾}'7U J3Ͳr0<^9PE,0V0Hj~=R/.95g7"65&1ZJ^;c+SpD* [h2ςY04L[gl+ݡzH=^zw56if *I[ɛ?@aཱྀr]z*lȎ,7FwZK]˪u/]t=JǮ}k?#kȰϘZ3Q+uY&G=\ȼ,>Q5]%%kS zy!Q4*!>:6%F0On5OQ G֬5Chm*LnKj )FX>D<$od`!A$t3'=^[qgcj)d|Ǫ0 @B\_<5#)Է§89|<MY4`0 C`6$纇}(*9:' p1f2*=JԐl*: xR(iTbs,vlE5*hH9vMe3)` &F>1LAqֿ々`iQ-NJN5` JG,w{%, W\1Qp6vZ/L%#"뿏6enA;Jve#/=z+Iz?9h99< )hiJ )[]a*H$R ɑG@ʱƹ2g] Bv pQxlimTm)d\yxr<&$~L"ᄑ7NOC_Z﾿c~Ȑf>dE^C_flY"I.8E M{=%,^!M">~{qv{R9u׫?,ϗ^4mS*A<OCiV)N9?ʋmo$mLX}AoջqP tcPgm hxA&Ÿ4 8cQ/fr׼@P(+!{:_;i|1>`|F(ƳM rD&@NU@; =nID_HW >`I$^E E}Z0MN7 # 82^(ڴ}r"#4-3>1rtL(cY?͢RwЂ-ݭ a4P"m9k:wa+[{n님aV;,~hqѺAxƳڋ0NٛJ؄ԟ<4 W\~=r-# a"(2sh`!%_"=N&rrO SHa*5i(H$ȡ,8rtX: 1,6Sipe;CRͱ3RP6g&0J4Kl"/ecgi$SsQ ii8$oDZe;Z@%X c{IыEn0;4D%% v:˕!S hE3$$p$^9hu4eQDTՕ 3),ꦬpo~7Oľ9`۶HLYRy jXJ>mp,[bˊpgpxȀ8dȎ#;otjJG/.l%CPzA.}{ ;Hӻq+NzZӾpʲtoÔ*]2^|!A3]7z7kqhCU>,yۣwKr,Kѵ<:nyzs7˛1E>VG_}EvH!;Br7;8Zĥ;][oVAVYws]hmNEnExQ>DePE~X^^W[3+ TkЬ:C8 iBnlؘukCTdlN?k2N]@־g~B=c;x%W9_7qҮ |Do@?ZGzE PAF]ms7+,}hbU\NTd^K߯ARҐ"E RDzth>kϚ/-i\lb]܌ V0Tp.Z!m-Fp^ad\*ص<{nafjs`7!!h?B-SIjaF[VP`1%; ƚkOiZ6XA UN-o -ZjUQ)ߴzìWDBs'!& JQ*6bΉļ)EmAD- +&I4r ̚LֵɊQMqI`JI@([&zD$O9W]Ӛ%$ T&X ɤդ2(ؽ^kN;>/FZ~K X.ZFDe@סhvHG!\.a~/?_w+ȷwz>wWY\h(=?//WfɇY7pEqZ,:Ox앣 p6$ u,!{,pm@gL.pެĿռZ&N ZT|S, >s 7ߊۿ 8ª$!((F <߄>~:aګnM+rPLEy#JT0X3/t_8߱_KI; \ONӋxX.t7>}IܜpyAJh B/׿t9&$#f+mcgt K;CUfR:lupRVp,M+l08D3$|}THH;o\.|W꒠ʣ"m+ϥONNH̬7]&_>X<8 gnp[]AE+9s$Hu> dw6)n$uYf{qqyh9q_N2?Jf7Y$>7l2ߒ)ޜǛۘz4 iӮ.@Ktp tU!sǔ-Gh\ 7\KQb&|7cۛEtvf2?X f&#m Ԡs܋֛>m®/|Y<77<\Z\#"&ms UWilY_z|} rwA7~{t Z`]A7 "j}7ɛkz+fO6X;dB5ܺ!^*G#̊TmW7իVΖRCoWNJnSkZ<ke۟!!G粽%@O+6)_s\VT~|=hN@z-ky5[Ȋ3JR=`_/z ޑ xl0Alc d d9p͆1eq^8x8M6!dr8  [ayy sF[IHu+ ؤY]%A252(4lӀ̊z>Nѱd$~6O&e'=PW5U2L ^b"h\ù@$XƋ6@lEKf{ݣ(eù=cR #7{Ed/9iT[o{ؗK*{=bwë0rlQ% C;p; { 9uQ#ýy1yD{N>i';d}^ţO,F0RXMHF:EV1Ťx*9Dƴ1T|výW%?ރ" v(.g8re•Ι+\;xkOsivLBT Zr(Ni 80QEMhX6d beR;P2|wYVȤx{\>^RQtoz~rνwK6jn):ͷWlA}]{YD9Z[VnkGdk9m9mM/90 }Ol!ȓ<Kӏwpяrǵr:ǵrY'q=Ee(K)'f^P׊6(V4$GM&84)m+T+zD)`X^X6ĝj)n*Q<z: Cc:,סA i#a*yOh)H`\E]K^{ZrA򺬆B,0JDʴRG"(IH+I1 1m %RU+#STV0"T6FJ`{2f͜TAS] UVVHd@W`:OaRX  Okhm=\ ɢV!X :[]*ArۺƶN }ēS CP^(]x 4tj<[DtM[۱5"vZd&rMh fělw¯MeiE}m/blwW87Ο맱l{pTQ Kk[{#c_ I5BO!W+{^8Bp_!NvfO^Nqٽ4. 
;C(-h_[BsFeMN(/ ޱFY*q=-yF,jo)/XXqPugNZ7 {,XXQ#w׊SGW1~ 8j;r?=]?p#*iQ+rýGzigp;8Ƨ@v$>o$9*@P{x`2,%>ikFٯ[9KLh=$3'4EPD+Z :&8S9'!x:^kDS847*l9v?Y$ٸa-yE/ӣ3}o?Nxx=w_OsB*v!9.X1'oiMI@`O&Pg0=54Vߗbt./]5$ Ϙ5Fqi:7S t׽*DBAMN5tAҵ zH 8U 6rT0YL/ FKb^'/ZVsnHLyxﺗ*a^ۆ8\I^ZYRH:([c[gɁmlFodHN;=w!)c JrT2_:Yᚢ䶷û"ٚ Iם~Hh!kI] &{1Vk|a&Qxxb gwבtw{uG䶽p6lj4-} i(&!Fl$\~+fsmبq>50d=U [[+n&|1XB~aZ7  =MIb`=L;Հ~rH9ګl7m2PZeGL 32 -CkM?w;L}C/h~4\Mq4?{iP乎c7XA/AϒKU-VbH)Xu-VR$:vpaJ=֐7Z+Eǻϥ[x*ߺNF)͗XBg?שkN$p־PX~|YqkoEx·At}Xxp;kq/V&nu1i>u#.td){(jD@Q`uȁ{ a:5iv${+`=t%*@kM{Xz!iz!Fl͹NNi-g9~(.3zz]foJ_zطʌ>ʌ2jhpFuV.<::{lxR̍6 _2 ڹ{yf^>>2[ݫL1)ԖDŽuYF;{(2N8 *eh}(= jދֺ 0ޙG,A`Ҳ'gA?k^l>9m2\M#6Mf1~5uIVRMMf(7Iqh0 @yd`b&2I>X)hs涭ںQmj/3܇Z/!DYBETe9T &Z؂5rϼiG:}&>ӛiyW@ rIxA(Ԅ7<[ʻ>4|cLG}ó8;J])>BGpT'>wƻt 5@5ܑ@#vqa WB,@cYrF`--cm;&[~c[w;'[RF`x{R\?1\iDgy0VnH QY-(YBqcyO"wc8"u\8Փ= PkDN7Γ&Co/ZvvXfI2Hb#rRRvtxͅd|l׃ >xRFsJU~m%{FE.pQEEݿsIn߹י +5ڭ~Ȗr2Z?}Q|ZGtK ^-ȅR*?R;ӑY++dttF}ۂML۟矾tiݟtss}~Zlżo kT;nǧ;Vn8UB n>޺w >\ -jȗO/?tラJ^vqYz=ȇ _~֏ۆX{=ܦ}h241Js3miFZk~||?xY==:M jlbuTC[ 9 h)%M+ic3Ms~F{Bhn.Y 'g,խ{J cAUÖںGH`[ _H[lV?cR[ْfl٭y(B 1jgY`-삽5] jL>KE 7\Mr*ۏVLY*XrH[8E #Z5M- 3]+*xb}]ƹ4ZTֻq9:L/uzV\>ެ?n:ۗG7^J4Z[nI*JPJeBCr^zN#J{g <jЈ<;\L(!}8*O7,EL(9-e@hLSvrߒI}zp p OzsyZ~闲߮~.:x_FQf|y3+g|!Nb.KR#g4kHxm?)pJ\?B@A&q5(bTZXQ"Mf4e #\ 4C?bmڽߵ{ۮ/E&zܵɌ%2^ 7PM! . ҟ񣐷 oF_2ZbI1#eƣwAPqՃ̘ #(O(3BD@:P!t*cqac^EȀ$C6Tƍyn֍ 5o哾KuT=Euh #RU.#˪(2!V̙M l2ʌ.)]Cm#QY ԦV2*˨ʒ pЪ≂Mfεʨ^*YwB R-2:m Œ_[OЧDJJcg>jQ!eF0Y eh8NO9nېn_w kњ}E#%b7dH7,PJ,䕔%ekZ$,Ί,L뮐3cՕ5R,JĂjUvϳ9u:X$wT~_C?4VV`JEiJ[pQ]Ti *!UM+iPvWb)yN"R7VK(K*ra*(缐E!SIR2d*mZDSgxhE-YsYg<>^.a>=~sfidtOQ _s}i6^ܟx ƛg X c *HAi+8{)JQmOhClBZQՃݬSoO+\ t|5bDZiMbݺ(!?t;q°1bHQx(@5>IYI^?zK^^߽.Z߽zWbeFO6* +pLohWPznt C;!9@S (!4bbjeL5?>2[JXj$/*$kj$s @z(sáx㫙jt@荝\(UOC*&wԵ: &91{19[+9 %"tQ&H@99U¨ū[tfۏD6bԇD¡=dܶs>RTV&ӶRR\3ye-lmJšԹZB=! b+4 JA6Xe2+SYe`*3h le'U m=O߮o85G) 9ӨQb{~_pFAĂ3iմ[wOEBAXݽ+$()l9!?G2Pz2ѥ @snbf!g{Z -A).uM?7uB:&a1VDmwQ]|{d=b/ t޶r`yڼd/{x ]$Ԧ)'Z!-&/;E&B"$ae1+dJZ?e^cW5b.Kߞb3 ymt.t(zcz536P([i{Cavסb0?GL,HQ Vyyr)Q yiEm,'j|<·ޭ~fKiY{qRz(u5vͶ>s^" ȪHlYQ^!bsVbf+YL $ڢ"w^y^!Oh 9tX!Ad49DVT%(۴@ImYA_XlSu+x Η :-&Pc}_5)yI샲œ2լw;]sc ~'"E9Jz ԹE1vǧMc%.Ow`qmx 'vHb8ǛPE벸v<p5zw.9L 1\6/?7A E~8߿ۻ}bt\Eq&VZ)UMI d=:?G3-k%+kLјMtqx>Ƞ~ {m2w%C ~mT[yVniu"*=tt܈O@85ȍ6,޵F\tR&<| 5o)9@. 
r?U 8/էsymdc}f_;*t4F A q ]m۱y]!u.<-pqNzVJ{yt3M <2̘*$нΫw{{თ჏$ՐXB֕ˡWH䕋4A"BeAJz0c7 $k%oE{w`B6} eqYy=-KhF-Q.4Q#ENg#cO:e2ta a˧W;ʼ䎢KG-vZ3:pD$-bIJPRgj䵙 9qMW\+˻Tk@toRd@fCD*?kB2.*iI%*)o]x tX!=D+@Eᜤ \Q p ŭh{+w^RȃTDo9G` u5pE9`/jj:a9G UpFQQyeN(0m ꚆSp-ScKԙvbԼZ2'㉐ J01Ds,t\:&J 0%1m<⢾Vawx*Zp i-E殹n ]k|_FVYN@A,YtbܴYᡣ'xnbmş;7__lt}h7ypoP.^yBطZ.G\=aIu}/j{ %^-4k%,Y)Aeㅞ8΢ 򵤗 рvԝo(Fq3DH1GsfB(Ug~87|q\|~rV_o]f&3@qz}|(++ pVd5JR>Be#K"K( ȨDdh~ |EO-[]/zsӂ )ݮ?2nʠirwyrd/{!DAX;oKh\З=or%4xHp$$OBeWkdå!6 IhΊr/ s[!ffwwsv=([=x7vqⓝ_̦bꋬY)w:(aE8}Z3ví~F^PW=;F܄H2$˼R+HСx$" 1& wsjgk# ;vcyհ%L%JF$ Y% ;FT\ ǿQa>VyЉ%b5#'idAy.B]+^WoD(W5[(Plv!\RK -K|  梚I}69j\4;ZJy/w0ç=o[Ch=JS˕f6?lܽodJXdY">͟{F6*H?<Zv}1Mٸ7J@uެ]_Ͽ4AG2о08#fG[!`s9Vn%Ps +*(;Oe 'K F&;fDItH+$Vz~#zBT{T9pClP\ I,jgqL 0δz\"-( m^-+MEYn XLnd,ah2Ygb'\3ʑDB"W4іTҜK6Hd׀DK0Ē#h1A "YqBGè(l 9vwμwu-XtFbw]+mgSnGW'$5ZQ/$w/qS~;I,өFO`,iRX_DQRe*EL0&_n$-v ZhC-Q>ABx%GhTd}OQ.n^q8K>OJ>4ϪҒ6z*hAQ4%ޫF=,Z GPT=$췴itU%CWܭc `&1wO; Lw6x~k/JW$u}Gţr"(1ZbwY[Yp웈4VP*?Sb@q!Td}1Buwulq8Q+abʫ~.8ŴEkK"1G9/ N%@>7Z}Q"mpkč 7.ٕEtc}\&_NӢx7g;R6>nqsx{4A~>\,n\\ l7 \,OcVajgg sD寢_<0]c͜TTu*wdє]fPwo׿{4Y)=EG}De>UʄTi۱j?ۻbi"e -D!-HԌdT fyr)I4.HF.FxJIB/K.QHMP U2TtJ" 9 j;92-3x=VDG,ʞsxR^F'nUPbτ_D&i\+;7D1hW]IʊFh)~|a'ch$RB>'ڍC;ѭsv<uþ{r8L˺R0 65K^k o$1DKHHH!hI4 2R`!WΉF@kmzlƮd]y,ﳳbkɧNu+=TwM 2XmL Ny 4@ip UM'!yk#jmQ Zgo+3#=NJf;hT캡k=<].(C(~/fop^ B4fۃpo׫2E-zx_mO>\1m+"Qʣ˧G'mRݒ~a.9:v^PMo :Pz/"9uF`+ũ"ߴxf HlzۍuN5ofв\tŒԧʞ=-ASe@=ʊ-ub O{,,hc6:2e`p #lZ꘭ <}:e&Ke gu[}YRbg̱փU䅴it!lڴ}a6(x?y0ޚSXgF:U,D'Hrt~LXHX$1ڋhg_(ܖ*kvFI": x V:88Dު$|Jı^'6h JH.'oS=׸ezå]wo@ a*Jpt@amlVim ?^׃_~l!QJ4fB@Qdj;sRv9X%$sɒ]l+hko8q[HLx3Y~A@5ђ@>/IS4~*zOR 02D=I=uP)5;*hB튦r;]Y䢗Y 7m֊ >,Ab{9ׁN@[5*$"wxZ3ڢ@glYzm -AܳE]QKKgyg㴖0t8M0 LP*kx'JiE:F~4cuό*whKg46x+ͤQj%6s#&%3k39ѠzJq.Eٌ:F\ONRD3ŌިѻkQI冋 }}<jKQVm)~-ս7$9A/ WwzZˆm~2@)qDvSs7GEfu\/s]-mEXOcA٪ǿ|$A7q~/Wq@S3oRWVA[n&Mh`%.[\!o\Ete'NGX7Zg@ZZN7X .Aͺ7Z>4䍫hNA{T#][GZZN7X)mZ܆n7 [UtSRٺX9/fpusu\i{C> um_?}7s7<*oo_@zZozk/r]xjj}%ҮV{F90idEb0젓: rP2Ef G, S8J(RM)>HhWCMkcjxP]~tHEy1KTOei$1,l|c `ǒϔ['jU$/T(R"<+!t$ȥw{M2Ӂ"{{+K@S{ÖʴMg [E=t`=ľSE&N˗k$ڞBT9,QƦ 2" 1>!*"(nLmHޕfKrkIf5,BXC2ROZd&eZ p2hMb$JEvzEAʸ(,nHڃ+R!jjj1QlS0FX"eh;@Pb53 XЙF.yi{k i4gDoxrTe3H8  "+"Ɣ#cV~*WYOT0An9xFkVdqeL2Lv $Ղ(*vާf#U ^IJ}UF';%YwK yb6x(FÂx`?=w[f|I0 `,Q0y"Fʙ>gr)) ^s3)oZ5)El>eJq F4⇿Vo0we&НK)g]䃰.6uQz^{r)"Ay501N&."}|#qZ2ɮayd+I6?WZ\?<SLa]~-5c&8Љ|6[/'>ùVY+{ˊS&#nToez˷_kEE}PPdI0\3ε%{N)n&v-D>`qߏ2NLЩ&>DHh4,u#vo^a9'lIVӛQ}zlg2Gu]H3I 3ތvӣKq2zqI kq0sޕ9Ϻaftw14',t匃 3:OSچ}KGf a3" %)I #cTGk P˝29T"%ȡ,`"ZYXPDLu5ыn_跩3 5},RgrUN{:V6#$jJ;,r \} U3iGL=GTUV-~B4Z@~ t65˴u.=4s}S܇q=^0"U }֭%S.m z4Vк!o\EtJ;֍VvBV˃թF6a҃25 )n}hW-:yCxϓ4ʪOM(yI [eת[N*!bW5\ZNnj:c9=F_7jlv]gγfJd%z߸XUph1P.f-؈W(O}Dӱ 'Ӈ<]ax+}7v%O96#I+k\9 _|bURD>wȅĚ+܏WOLYw_ _k>3`O#^=%YB7|jL$?oQ+*vn}bu %q lʂ2Yp1Ss !D (6971PCI |8unkբPJ۴xvcV}Me@sE`9<_ E-ůwO/b>&LڎϨcuG&aHZ{I4kz2ϻ4Sӎ;ˤO'aW$1]gw edJOo8șzKc3,GӀɁbmW^pvrv 3LЍ6$Fc (s]ْw?;]p5B ӝ]$`4/]DMÃ"Ak% ~V e:oׄ<4#q۫\S07/tvLtgN1I4U78b^^aP[.QY2JJS: $0)xj,YHcv"h8)ԕ|XdDE{V2ynH1xELƄ NkE0 "jF@ #HJv( (FQ:Qj˓wkc ʹ7 u('TQ6B6$FJ1Q&QE+0j.䡞Rݨ R B+w H&8<+1{')PF Z6E #ky/+y;w(_ԺI(7)b<%^*|,O'K^6>~-Hd>1 (*0>A;2w3N iaJAsJ s{iJэs0.͏/0b80G ?J d✔.coR.G]FU{ãnfoԴX)a(5 |eAOfM)qJ#=;8AGDэq峂oژVɔ<(B]F'`uK} T ރq=^ ! N}Y5H} ֭%S.mhejh֭к!o\EktJuup#jyP:b&"PFnw [UtKLNJPSC]Jްw,Dd +xc,q(i~. m!SQVvѪУL~3|:PݽL /..)A u<5@7Xʱ+q+ㅲ3QWsܑk+Dj>:q{umqYWѝKs)"a4Oc)<u\Dh2Q#b pd!Bd! 
MPň~j8< Jx oOE&]L#)l=/iL|Y x@Pst*P!9.4eP{O&hVo W3Q;aap~ CFh3˫yLryф,Ɗg@gualnC߾y&AHͺ*@]?ũ7͹ ndK9&ЌJLJ7g3g EÅ_rσNii|?g &C鮮%.q&ɠ\7iEB 2O'o@E_3FDdF6:j!\AQuxV9A(^_kX^_μVi͛,ioci4ht> ߭%ƍ~ UeCIpmI'tC`F/% HY@f DVdŒ/.h^`x4b1h"0 kZ!I#r~@#@CS.9QFP M-$i4:2q4qr$ur\km|X1\uReTFHI /_ҙ zFlЊtV48v+b8|7g,rWсP*0, *i}` !0>#98'}YgY4kY ] Zf;$s& yDo84Ayt6Fstue˵j/2]G,1aS"1$d 6\5.`:)/<_B$ Nګ>PlJ?FkbI5Dh(I>{zL䆦jyrR9L0pK }< mQ\JiEF"4m>蔒7ڄR emo0:m\,™TCy9b 7J{RF!i%fn>-gR3NIФG{#L%1KY x# ՘ @~7doN Y$1@ ʊTZNh9\ gon4~g#sfO Fq]ϱqH?{WƑ O կ!g m`'~iILIH#¦Fz륻 S>d΋#@7o"\U />XL''>VM7pS BjW1dL(UMx' 0Y4NGh\DV`+`ǜZcP8Xs.%BatbL`E0Z̗Nl,MNb\㭏M&H6PƪNƆ9j=G)P98G'!ː!+f [:\pq+$ܔ8F{0=xk%ve8: _}@ 쓣47.e5vWݍ%SZ ϧ_gD% 5h0J8r# gHo h6觳2c"6i~$jh4ވ`dhd9`u Ϧq5z0Eoޛà K=5#"T'z:ڊ^UVt#I`dh84q!HaJ|̕g(Hp[ >:4ga&Y)hx: ~ xUB׌ ~tyh* C]k$#~'i Ҷ4Oǀ/*1ywg>`KPϐv-F14 /jvd0S(n)j \Ja.^ՒGo0Wa?|ޡaWŇ :0D!sdȤWsBVrBWB?'Vʩf8Q\&Ƴ)9du ވ׉~c ?ʅhbD6yLE@}A8gFVU%`5e4 'ljLJQ4}\F{o̹J8* -vmӡVaN#~x7"'^a`+j0ez,0 =lÕd/NMD,vri͝Kɥ@/0Dpm!Jsr`cAK (z\tLT( ~ f?g??3zY]??0KTvhh\ɩhiK=aBK=NW Z'P48y?A&#N(dyS؝ [Ѕ1 4D m'7Ag2P8(d0*zcgb7l`e 0 4"^0Q,-E,Ф"^ѐJeT^iƴ6"q1*]EblKAύ[_嶫ȶ@ZÖٚ­rGl{qBk˷Tߠ s+Zd *ZcaV1ZlYBύnO*#ٚ{:aTFsc:wsZdNivȶ~%@CE;>hhm/NP0;Tdk+gsnkm/]3.k&z@V w l{wTccF3CE+@]*Jl{Jvl{G{%m,$hERpOPd hPCύfNov\lwCCύsnT]d :uح"[[y?pBNm/4n{5cŽ.SW^E'Y!h=>ō?~tOm[d ɗO9s5IUyaUFsFHnq<UL~/czoFGȑ>!%b4\x* Jw# !,~V }LYK$`ON6AkB矻R%HZ(-gh#d kzx9 +*>EN=|&?YL;Utr^8R(׭5Դe:.4R:?W nlqZ\YFt،K*b8 ${Mg9$i*쬭L6e1lfgJEL9yucd [l*FXiQEĜt°;~h꛽;k_ּ:!B꒾&MuZ+lwrpT· nOTFq/]pw;~ *bJvpyvN'{7]e 6QrfW?Kl}ߛM?5c|A3b_U~MgyO޵'v jGͲixqՋ+ؗ}[l%x;f}Ck)Yu÷;A<{皯EH!V*9ӷCɇeC o֡^c~0BB!DP;`T-zЊtR'";W6jvRU qR}cB6 kZrm4m|Mڴn<20bW$[^U&c2oqÌ;-.A;ƥ&(>oD@@"9F2ӣ-FLG݈?jwvGe\'5FFRrO{\`2"q1dE42O E(Xƨ1]S$o#E%ޣfjG7 ЏOC:&k._Kf.$5x5$ȏіk+qTkAڃ5cKX,oʂΝX:ot:^uV z/[)dk S~o[Vqir}GegLiD惛; /P J\>8%yX\p2.2‚HD^g)n^TzP F?5؂jVύJEwgg6`3ؼGalFlP\*-Bqn cuuavvwOFI-ww?ݓnW1ײõ{{6o  @m0t~f\ uk2qJvgC?@r+.~_h+ݙK~&=~5ng?Cor\~sroO0`A%|];7L7p ~޲ЏZ*#wjC? 3ܖ~:gؑ; |0J1R[; 7g?wdCO8#xӵYvh龩OV XY5s}3o.A ܧ4r6n܉KJLnňJ@m֡qmM脉8HACGBQ f0S{-=Xcdt~ 3|8;v8*l Ef } 9_|^P`V _u%|=fB CWj=mۇTPB 2`@ JfA&1/Je`N5goWE$\6> hX$;\hkP Ȏo]/PEd9$+C"#KM2Y'dtPe JATk\:3R,i&|Gwh 텦ct+J&۩sjFSH18py'35:KaCT΁g/KœXXAe Vh}0\}xQ)Q(U'w" F3+mdF{G?c0&*ˇ0$zk}@&&11, u}S^nAGvp&*Q}́&޵5q#l@ҸJ)K':y9baD"i^|PEQCJC,Q3`ݍTy@x0E)LȹŏQL3Zmxecv6mL!gs躹HS59GkHTrFZ *BZ4ƪVPqe6Qm}Kݎe4\.ERhx0?XȵMF/߲d^`j 4R8Lctȥ8\=I|Dۅ!mI2rbZoT)ixʸ*htvjmpPE*C~ `$\ 6Ifa^e im"|t %SdH7~3ZiiyO xiUQּ]k R1ۗ^0%L2wXCtdNaBnUPn|CycjLOeS r*Y: (+xقb='Tk b5+1{uT,Smnukj)gմfE#q_qcv/x+E`10սf1DHLej&3#c m#cQ=VGRkْ0Ґt%3RUo%-YƺbF֯7Ybz+ d[t HRrɧ|`>[o 5ܧ^v֞Ren4y0WV;Ӕm_׷&g?TY*!R4p,[㖭9rS{ _Y&s|g{{bq>':ZqجS\3ݭOѝ#ϿrC%&E pg0Ed:h$*t 5խ1ڎAA.A6V԰FĻ{f]&sr>|3X'#0hU3t*DOB~sō'ӦjRՌ\󦱄##geZ"=C D#QDuxuqps6Rz/P qogT+10L 6ꅺbJ4kgHO(UJn54E݈v&KhELpL[sXZ31KifTTs-q8륫]|"Ut $^ Rf[steL%dǧyOZPŢ $N8s!O #%Q(& e6%J@4s6TL7Ԉxh0 ֩J1&5 uI޲ -`#NF75iSau?s3] dy:Ή0!28eL׈j¬XX KP 8D% mZtnD+֡b)~Aܲ RURܒwCmPafEcH'Û9ќ+oXr. 
Jan 27 07:51:54 crc kubenswrapper[4787]: Trace[998286491]: [10.887647157s] [10.887647157s] END
Jan 27 07:51:54 crc kubenswrapper[4787]: I0127 07:51:54.531760 4787 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 27 07:51:54 crc kubenswrapper[4787]: E0127 07:51:54.532782 4787 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 27 07:51:54 crc kubenswrapper[4787]: I0127 07:51:54.534354 4787 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 27 07:51:54 crc kubenswrapper[4787]: I0127 07:51:54.534759 4787 trace.go:236] Trace[468694635]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 07:51:43.117) (total time: 11417ms):
Jan 27 07:51:54 crc kubenswrapper[4787]: Trace[468694635]: ---"Objects listed" error: 11417ms (07:51:54.534)
Jan 27 07:51:54 crc kubenswrapper[4787]: Trace[468694635]: [11.41727997s] [11.41727997s] END
Jan 27 07:51:54 crc kubenswrapper[4787]: I0127 07:51:54.534833 4787 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 27 07:51:54 crc kubenswrapper[4787]: I0127 07:51:54.545194 4787 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 27 07:51:54 crc kubenswrapper[4787]: I0127 07:51:54.640522 4787 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 07:51:54 crc kubenswrapper[4787]: I0127 07:51:54.961534 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.016124 4787 apiserver.go:52] "Watching apiserver"
Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.021133 4787 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.021526 4787 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.022233 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.022318 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.022418 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.022533 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.022602 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.022965 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.023575 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.023637 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.023986 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.024831 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.025008 4787 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.027949 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.028081 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.028228 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.028635 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.028671 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.028702 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.029009 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.029286 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.034246 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:47:33.876653392 +0000 UTC Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036061 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036096 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036120 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036140 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036162 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036180 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036212 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036236 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036266 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036298 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036319 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036341 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036357 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036377 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036411 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036427 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036459 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036480 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036495 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036512 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036528 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036563 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036583 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036599 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036616 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036632 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036647 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036666 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036686 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036704 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036741 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036759 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036776 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:51:55 crc kubenswrapper[4787]: 
I0127 07:51:55.036793 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036812 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036830 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036845 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036866 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036887 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036912 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036928 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036943 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036959 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 
07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036974 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.036991 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037026 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037076 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037110 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037130 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037152 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037173 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037190 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037222 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 
07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037245 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037274 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037294 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037310 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037327 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037343 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037360 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037377 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037392 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.037722 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: 
"bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.038448 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.038533 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.038808 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.038810 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.039006 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.039404 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.039821 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.039869 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.039891 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.039909 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.039925 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.039946 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.039967 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.039987 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040005 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040022 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040038 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040054 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040073 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040094 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040109 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040126 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040144 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040164 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040188 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040209 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040434 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040467 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040629 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040662 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040690 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040770 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040796 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040789 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040827 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040894 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040898 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040919 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040944 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040970 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.040994 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041016 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041037 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041056 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041074 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041093 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041109 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041127 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041146 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041163 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041180 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041116 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041197 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041228 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041248 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041276 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041300 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041330 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041423 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041456 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041483 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041502 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041520 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041539 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041575 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041599 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041618 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041636 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041655 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041673 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041691 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041706 
4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041711 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041752 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041775 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041793 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041811 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041832 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041851 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.041954 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042008 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042029 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042047 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042066 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042088 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042104 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042121 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042140 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042159 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042177 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 07:51:55 crc 
kubenswrapper[4787]: I0127 07:51:55.042195 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042211 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042228 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042260 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042278 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042299 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042314 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042332 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042353 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042375 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042394 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042413 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042430 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042449 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042451 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042469 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042492 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042518 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042541 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042588 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042610 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042630 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042647 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042665 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042682 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042697 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042715 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042733 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042750 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042751 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042766 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042785 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042802 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042819 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042837 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042855 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042874 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042892 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042908 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042926 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042944 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042961 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042978 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042996 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043013 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043030 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043048 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043067 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043085 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043101 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043118 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043136 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.042954 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043155 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043325 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043339 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043388 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043391 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043443 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043472 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043529 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043570 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043592 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043614 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043637 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043684 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043841 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043873 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043894 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043921 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043946 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.043974 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044005 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044024 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 
07:51:55.044100 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044104 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044114 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044168 4787 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044194 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044222 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044245 4787 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044268 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044291 4787 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044312 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044332 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044518 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044541 4787 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044585 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044605 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044630 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044650 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044672 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044692 4787 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.044713 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.045068 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.045399 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.046374 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.046638 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.046761 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.046786 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.046790 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.047079 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.047689 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.047679 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.048104 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.048107 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.048146 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.047955 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.048858 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.048869 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.049158 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.049489 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.049580 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.050757 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.050887 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.049597 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.051176 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.051297 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.051740 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.051791 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.052103 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.052286 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.052356 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.052375 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.052632 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.052987 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.053112 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.054495 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.055504 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.056498 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.057507 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.057934 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.058265 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.058497 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.058691 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.059251 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.059728 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.059712 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.060178 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.060589 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.060702 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.060945 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.061051 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.061490 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.061873 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.061906 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.061601 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.051942 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.061957 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.062033 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.062098 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.062290 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.062334 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.062784 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.064532 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.064999 4787 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.067308 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:51:55.567280059 +0000 UTC m=+21.219635551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.067696 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.069361 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.080283 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.069536 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.069899 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.069811 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.080466 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.070257 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.070470 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.078169 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.069374 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.080655 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.080857 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:51:55.58082387 +0000 UTC m=+21.233179372 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.080962 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.081215 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.081658 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.081904 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.081968 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.081988 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.082426 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.082480 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:51:55.582460723 +0000 UTC m=+21.234816215 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.082815 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.082987 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.083202 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.080841 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.083613 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.083704 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.083411 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.083771 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.086985 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.083914 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.083986 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.084026 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.084118 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.084524 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.085015 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.085036 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.086197 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.086741 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.086803 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.087089 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.087085 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.087235 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.086138 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.087525 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.088044 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.088142 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.088355 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.086712 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.087808 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.086966 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.088811 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.088841 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.088922 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:51:55.588896552 +0000 UTC m=+21.241252044 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.089095 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.089173 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.089202 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.089131 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.089281 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:51:55.589263729 +0000 UTC m=+21.241619221 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.089320 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.089482 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.090058 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.090165 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.090193 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.090700 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.090715 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.091033 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.091493 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.091799 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.091802 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.091795 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.092374 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.092901 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.093834 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.095229 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.096966 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.097119 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.097262 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.097316 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.097937 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.098443 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.096261 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.099763 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.104443 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.104973 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.105295 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.105297 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.105308 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
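The "Failed to update status for pod" entries above are not about the status payload itself: the PATCH is rejected because the pod.network-node-identity.openshift.io webhook backend on 127.0.0.1:9743 refuses connections, so the API server cannot complete the admission call. The standalone sketch below simply probes whether that listener is up; the address and port are copied from the error text, and this is a reachability check only, not part of the kubelet.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Webhook backend address taken from the "connection refused" error in the log.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook endpoint not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting connections; status patches should start succeeding")
}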
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.105394 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.105648 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.105849 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.105899 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.105968 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.106274 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.108660 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.108734 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.108991 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109107 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109157 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109136 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109211 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109230 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109266 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109584 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109627 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109610 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109714 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109749 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109781 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109852 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109907 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109943 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.109981 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.110293 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.110907 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.110946 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.111147 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.111180 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.111476 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.111748 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.111991 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.112034 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.112063 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.112334 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.112520 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.112662 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.113003 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.113210 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.114208 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.114441 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.115388 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.115618 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.116111 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.119185 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.124014 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.124602 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.125092 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.126187 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.128968 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.129178 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.129777 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.132211 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.133003 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.134731 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.135465 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.136761 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.137869 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.137945 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.139233 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.139898 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.140635 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.142004 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.142832 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.143975 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.144639 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.145332 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.145883 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.145896 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.145959 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.146432 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.147247 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.147789 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148339 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148586 4787 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148613 4787 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148630 4787 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148643 4787 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148657 4787 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148669 4787 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148682 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148698 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148698 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148711 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148736 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148751 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.148919 4787 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149058 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149222 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149241 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149254 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149268 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149285 4787 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149313 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149325 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149449 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149476 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149490 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149504 4787 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149518 4787 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149532 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149544 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.149544 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.150050 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.150185 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.150325 4787 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.150442 4787 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.150586 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.150736 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.150877 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.150979 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.151060 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.151181 4787 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.151280 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.151378 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.151468 4787 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.151575 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.151720 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.151821 
4787 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.151910 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.152009 4787 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.152134 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.152262 4787 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.152353 4787 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.152440 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.152605 4787 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.152751 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.152931 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.153074 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.153220 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.153326 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc 
kubenswrapper[4787]: I0127 07:51:55.153483 4787 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.153622 4787 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.153744 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.153847 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.153957 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.154045 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.154182 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.154325 4787 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.154462 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.154620 4787 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.154746 4787 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.154894 4787 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155021 4787 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155170 4787 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155276 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155365 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155491 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155712 4787 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155751 4787 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155764 4787 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155779 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155790 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.150667 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155800 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155844 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155854 4787 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155878 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155888 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155899 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155911 4787 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155923 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155957 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155966 4787 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155977 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.155987 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156000 4787 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156009 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156020 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156030 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156039 4787 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156049 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156059 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156072 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156084 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156094 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156108 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156118 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156127 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156199 4787 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156213 4787 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156222 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156231 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156243 4787 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156252 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156261 4787 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156269 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156278 4787 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156289 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156298 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156312 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156323 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156332 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156342 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156352 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156362 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156372 4787 reconciler_common.go:293] "Volume detached for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156382 4787 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156395 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156406 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156416 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156488 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156498 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156509 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156521 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156531 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156544 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156571 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156584 4787 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 
07:51:55.156597 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156609 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156620 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156633 4787 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156648 4787 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156658 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156678 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156691 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156713 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156723 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156733 4787 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156743 4787 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156751 4787 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156760 4787 reconciler_common.go:293] "Volume detached for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156770 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156780 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156790 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156801 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156811 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156821 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156831 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156840 4787 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156850 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156687 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156925 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156939 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156949 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156959 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156969 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156981 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.156991 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157001 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157012 4787 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157022 4787 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157031 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157041 4787 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157051 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157059 4787 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157072 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157082 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157092 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157101 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157110 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157119 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157128 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157138 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.157467 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.158803 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.159331 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.160294 4787 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.160397 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.162122 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.163011 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.163413 4787 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.165189 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.165907 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.166988 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.167761 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.169071 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.169652 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.170837 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.171864 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.172507 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.172989 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.173928 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.175421 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.175970 4787 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.177231 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.177466 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.178225 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.178875 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.180273 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.181075 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.182666 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.183377 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.190275 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.204291 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.223628 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.232085 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.235135 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.246868 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.259722 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.270832 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.283285 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.295895 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.307974 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.321356 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cacb7c35c671a1aaa2121bf6d0b1e05cd37f7a2e3531251a1b4a56d381b97526\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:48Z\\\",\\\"message\\\":\\\"W0127 07:51:38.260378 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 07:51:38.260778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769500298 cert, and key in /tmp/serving-cert-1332914759/serving-signer.crt, /tmp/serving-cert-1332914759/serving-signer.key\\\\nI0127 07:51:38.543192 1 observer_polling.go:159] Starting file observer\\\\nW0127 07:51:38.545642 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 07:51:38.545783 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:38.547951 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1332914759/tls.crt::/tmp/serving-cert-1332914759/tls.key\\\\\\\"\\\\nF0127 07:51:48.940921 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.341216 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.349315 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.382572 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:51:55 crc kubenswrapper[4787]: W0127 07:51:55.402469 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-49a5ef49fa90ba7dd3f4ea76e1cb2effd0e2212f43be1e4021cad0a3af69661c WatchSource:0}: Error finding container 49a5ef49fa90ba7dd3f4ea76e1cb2effd0e2212f43be1e4021cad0a3af69661c: Status 404 returned error can't find the container with id 49a5ef49fa90ba7dd3f4ea76e1cb2effd0e2212f43be1e4021cad0a3af69661c Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.404538 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:51:55 crc kubenswrapper[4787]: W0127 07:51:55.424791 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-eb8bab198205f7915ba55dd08a7964a0974bacb6c02e89bd3f3f3118a0d9fed1 WatchSource:0}: Error finding container eb8bab198205f7915ba55dd08a7964a0974bacb6c02e89bd3f3f3118a0d9fed1: Status 404 returned error can't find the container with id eb8bab198205f7915ba55dd08a7964a0974bacb6c02e89bd3f3f3118a0d9fed1 Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.662153 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.662353 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:51:56.662325013 +0000 UTC m=+22.314680505 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.662766 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.662831 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.662873 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:51:55 crc kubenswrapper[4787]: I0127 07:51:55.662904 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.662984 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.663034 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:51:56.663026728 +0000 UTC m=+22.315382220 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.663347 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.663460 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.663503 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.663542 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.663470 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:51:56.663451436 +0000 UTC m=+22.315806918 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.663653 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:51:56.663632909 +0000 UTC m=+22.315988401 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.663389 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.663673 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.663683 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:55 crc kubenswrapper[4787]: E0127 07:51:55.663707 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:51:56.663701501 +0000 UTC m=+22.316056993 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.034855 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:47:35.628662276 +0000 UTC Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.223823 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.224372 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.226472 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3" exitCode=255 Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.226566 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3"} Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.226618 4787 scope.go:117] "RemoveContainer" containerID="cacb7c35c671a1aaa2121bf6d0b1e05cd37f7a2e3531251a1b4a56d381b97526" Jan 27 07:51:56 
crc kubenswrapper[4787]: I0127 07:51:56.227271 4787 scope.go:117] "RemoveContainer" containerID="879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3" Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.227461 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.227662 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"eb8bab198205f7915ba55dd08a7964a0974bacb6c02e89bd3f3f3118a0d9fed1"} Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.229587 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632"} Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.229617 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2"} Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.229626 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"49a5ef49fa90ba7dd3f4ea76e1cb2effd0e2212f43be1e4021cad0a3af69661c"} Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.230821 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a"} Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.230840 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"89e13c9ab2974854ef597032becee5f7a47be69106c6883a052da96b137605d0"} Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.248189 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-contr
oller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.266277 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cacb7c35c671a1aaa2121bf6d0b1e05cd37f7a2e3531251a1b4a56d381b97526\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:48Z\\\",\\\"message\\\":\\\"W0127 07:51:38.260378 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
07:51:38.260778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769500298 cert, and key in /tmp/serving-cert-1332914759/serving-signer.crt, /tmp/serving-cert-1332914759/serving-signer.key\\\\nI0127 07:51:38.543192 1 observer_polling.go:159] Starting file observer\\\\nW0127 07:51:38.545642 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 07:51:38.545783 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:38.547951 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1332914759/tls.crt::/tmp/serving-cert-1332914759/tls.key\\\\\\\"\\\\nF0127 07:51:48.940921 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 
07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.283276 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.298714 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.314016 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.325865 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.338087 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.350951 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.364390 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.380216 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cacb7c35c671a1aaa2121bf6d0b1e05cd37f7a2e3531251a1b4a56d381b97526\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:48Z\\\",\\\"message\\\":\\\"W0127 07:51:38.260378 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
07:51:38.260778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769500298 cert, and key in /tmp/serving-cert-1332914759/serving-signer.crt, /tmp/serving-cert-1332914759/serving-signer.key\\\\nI0127 07:51:38.543192 1 observer_polling.go:159] Starting file observer\\\\nW0127 07:51:38.545642 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 07:51:38.545783 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:38.547951 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1332914759/tls.crt::/tmp/serving-cert-1332914759/tls.key\\\\\\\"\\\\nF0127 07:51:48.940921 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 
07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.393582 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.408918 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.422235 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.441657 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.455504 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.468330 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.673303 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.673388 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.673421 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.673456 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:51:58.673434366 +0000 UTC m=+24.325789868 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.673487 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:51:56 crc kubenswrapper[4787]: I0127 07:51:56.673521 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.673495 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.673667 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.673685 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.673695 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.673670 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:51:58.673660661 +0000 UTC m=+24.326016153 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.673606 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.673830 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.673859 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.673727 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:51:58.673718932 +0000 UTC m=+24.326074424 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.673627 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.674096 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:51:58.673979017 +0000 UTC m=+24.326334539 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:56 crc kubenswrapper[4787]: E0127 07:51:56.674149 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:51:58.67413535 +0000 UTC m=+24.326491082 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.035529 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:41:58.416028264 +0000 UTC Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.076245 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.076291 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:51:57 crc kubenswrapper[4787]: E0127 07:51:57.076389 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.076402 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:51:57 crc kubenswrapper[4787]: E0127 07:51:57.076484 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:51:57 crc kubenswrapper[4787]: E0127 07:51:57.076573 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.234976 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.238190 4787 scope.go:117] "RemoveContainer" containerID="879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3" Jan 27 07:51:57 crc kubenswrapper[4787]: E0127 07:51:57.238378 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.252842 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.276847 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.277982 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.290603 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.296820 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.304039 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.307011 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.320851 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.333875 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.347517 4787 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.361538 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.377638 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.394013 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.415343 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.433293 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.446369 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.461686 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.474720 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.488100 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:57 crc kubenswrapper[4787]: I0127 07:51:57.508384 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.036407 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:12:05.014677271 +0000 UTC Jan 27 07:51:58 
crc kubenswrapper[4787]: I0127 07:51:58.241988 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a"} Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.255618 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T07:51:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.267263 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.278915 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.299058 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.313050 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.326683 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.340050 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-contr
oller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.355054 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.369105 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.690113 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.690205 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.690246 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.690423 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.690476 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.690517 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 
07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.690536 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.691078 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:52:02.691048629 +0000 UTC m=+28.343404121 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.691113 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:02.69110783 +0000 UTC m=+28.343463322 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.691084 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.691140 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:02.691126421 +0000 UTC m=+28.343481913 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.691147 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.691167 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.691231 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:02.691207432 +0000 UTC m=+28.343563094 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.690950 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.691326 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.691461 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:51:58 crc kubenswrapper[4787]: E0127 07:51:58.691511 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:02.691503398 +0000 UTC m=+28.343858890 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.829071 4787 csr.go:261] certificate signing request csr-rxtqq is approved, waiting to be issued Jan 27 07:51:58 crc kubenswrapper[4787]: I0127 07:51:58.841332 4787 csr.go:257] certificate signing request csr-rxtqq is issued Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.036972 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:58:00.068111652 +0000 UTC Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.076041 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.076141 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.076055 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:51:59 crc kubenswrapper[4787]: E0127 07:51:59.076234 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:51:59 crc kubenswrapper[4787]: E0127 07:51:59.076355 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:51:59 crc kubenswrapper[4787]: E0127 07:51:59.076482 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.297199 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fghc7"] Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.297661 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-fghc7" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.299657 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.299928 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.299984 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.318127 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.330903 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.346756 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.360120 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.371627 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.396994 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6cac1bb0-6b6f-442f-9db4-a25d4f194bb1-hosts-file\") pod \"node-resolver-fghc7\" (UID: \"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\") " pod="openshift-dns/node-resolver-fghc7" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.397067 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8zvj\" (UniqueName: \"kubernetes.io/projected/6cac1bb0-6b6f-442f-9db4-a25d4f194bb1-kube-api-access-c8zvj\") pod \"node-resolver-fghc7\" (UID: \"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\") " pod="openshift-dns/node-resolver-fghc7" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.400570 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.421406 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.435676 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.447487 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.462761 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.497867 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8zvj\" (UniqueName: \"kubernetes.io/projected/6cac1bb0-6b6f-442f-9db4-a25d4f194bb1-kube-api-access-c8zvj\") pod \"node-resolver-fghc7\" (UID: \"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\") " pod="openshift-dns/node-resolver-fghc7" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.497958 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6cac1bb0-6b6f-442f-9db4-a25d4f194bb1-hosts-file\") pod \"node-resolver-fghc7\" (UID: \"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\") " pod="openshift-dns/node-resolver-fghc7" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.498026 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6cac1bb0-6b6f-442f-9db4-a25d4f194bb1-hosts-file\") pod \"node-resolver-fghc7\" (UID: \"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\") " pod="openshift-dns/node-resolver-fghc7" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.506517 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.511179 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.522296 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8zvj\" (UniqueName: \"kubernetes.io/projected/6cac1bb0-6b6f-442f-9db4-a25d4f194bb1-kube-api-access-c8zvj\") pod \"node-resolver-fghc7\" (UID: \"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\") " pod="openshift-dns/node-resolver-fghc7" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.535361 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.564913 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.601008 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.616644 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-fghc7" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.647602 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.673315 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rqjpz"] Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.673664 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.676453 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.676880 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.678734 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7642m"] Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.678773 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.678813 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.678907 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedA
t\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.679505 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.681080 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.681910 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-q4fh5"] Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.682351 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.683656 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-fqb6r"] Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.683940 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.684433 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.685709 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.691249 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.691592 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.691988 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.692191 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.692303 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.692527 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.692692 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.692932 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.693408 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.693757 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.693926 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.694081 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.699054 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-var-lib-cni-multus\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.699465 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-multus-socket-dir-parent\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.699591 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-run-k8s-cni-cncf-io\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.699680 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-cni-binary-copy\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.699749 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-run-multus-certs\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.699824 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-os-release\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.699897 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-var-lib-kubelet\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.700071 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-etc-kubernetes\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.700165 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-multus-daemon-config\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.700241 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-multus-cni-dir\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.700308 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-var-lib-cni-bin\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.700391 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-hostroot\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.700484 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-multus-conf-dir\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.700608 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-system-cni-dir\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.700709 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-cnibin\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.703943 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-run-netns\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.704049 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz7b2\" (UniqueName: \"kubernetes.io/projected/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-kube-api-access-tz7b2\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.711124 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.733848 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.752315 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.773762 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc
18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.805470 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f051e184-acac-47cf-9e04-9df648288715-rootfs\") pod \"machine-config-daemon-q4fh5\" (UID: \"f051e184-acac-47cf-9e04-9df648288715\") " pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.805871 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvg7g\" (UniqueName: \"kubernetes.io/projected/f051e184-acac-47cf-9e04-9df648288715-kube-api-access-zvg7g\") pod \"machine-config-daemon-q4fh5\" (UID: \"f051e184-acac-47cf-9e04-9df648288715\") " pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.805979 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-os-release\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806070 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0ff88955-7cd9-4af4-9e0e-79614a1d2994-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806173 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-env-overrides\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806241 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0ff88955-7cd9-4af4-9e0e-79614a1d2994-cnibin\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806322 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-multus-cni-dir\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806402 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-multus-conf-dir\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806472 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f051e184-acac-47cf-9e04-9df648288715-proxy-tls\") pod \"machine-config-daemon-q4fh5\" (UID: \"f051e184-acac-47cf-9e04-9df648288715\") " pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806621 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-run-netns\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806714 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-system-cni-dir\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806939 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-cnibin\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807070 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj4tn\" (UniqueName: \"kubernetes.io/projected/fa44405c-042c-485a-ab6c-912dcd377751-kube-api-access-tj4tn\") pod \"ovnkube-node-7642m\" (UID: 
\"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807143 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-systemd-units\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807216 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-etc-openvswitch\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807287 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-openvswitch\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807360 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-log-socket\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807458 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-systemd\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807526 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-ovn\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807613 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa44405c-042c-485a-ab6c-912dcd377751-ovn-node-metrics-cert\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807697 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-multus-socket-dir-parent\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807763 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwnwn\" (UniqueName: 
\"kubernetes.io/projected/0ff88955-7cd9-4af4-9e0e-79614a1d2994-kube-api-access-nwnwn\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807829 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-node-log\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807901 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-cni-binary-copy\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807964 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-run-multus-certs\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808049 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f051e184-acac-47cf-9e04-9df648288715-mcd-auth-proxy-config\") pod \"machine-config-daemon-q4fh5\" (UID: \"f051e184-acac-47cf-9e04-9df648288715\") " pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808113 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-slash\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806811 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-multus-cni-dir\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806597 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-multus-conf-dir\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806908 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-system-cni-dir\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.806276 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-os-release\") pod \"multus-rqjpz\" (UID: 
\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.807048 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-cnibin\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808302 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-multus-socket-dir-parent\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808393 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808487 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0ff88955-7cd9-4af4-9e0e-79614a1d2994-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808576 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-var-lib-kubelet\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808645 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-etc-kubernetes\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808707 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0ff88955-7cd9-4af4-9e0e-79614a1d2994-os-release\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808785 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-ovnkube-script-lib\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808856 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-var-lib-cni-bin\") pod \"multus-rqjpz\" 
(UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808926 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-hostroot\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808991 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-multus-daemon-config\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.809045 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-cni-binary-copy\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.809096 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-run-multus-certs\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.809156 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-var-lib-openvswitch\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.809245 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-ovnkube-config\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.809323 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-run-netns\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.810016 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz7b2\" (UniqueName: \"kubernetes.io/projected/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-kube-api-access-tz7b2\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.810139 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-kubelet\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 
07:51:59.810233 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0ff88955-7cd9-4af4-9e0e-79614a1d2994-cni-binary-copy\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.810332 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ff88955-7cd9-4af4-9e0e-79614a1d2994-system-cni-dir\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.810504 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-var-lib-cni-multus\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.810638 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-run-ovn-kubernetes\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.810991 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-run-k8s-cni-cncf-io\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.811329 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-cni-bin\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.811470 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-cni-netd\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.809454 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-etc-kubernetes\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.809468 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-hostroot\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 
07:51:59.809507 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-var-lib-cni-bin\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.809533 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-run-netns\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.810736 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-var-lib-cni-multus\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.811033 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-run-k8s-cni-cncf-io\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.811062 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-host-var-lib-kubelet\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.808050 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.809980 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-multus-daemon-config\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.834450 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz7b2\" (UniqueName: \"kubernetes.io/projected/e6f78168-0b0d-464d-b1c7-00bb9a69c0d1-kube-api-access-tz7b2\") pod \"multus-rqjpz\" (UID: \"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\") " pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.840817 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.842868 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 07:46:58 +0000 UTC, rotation deadline is 2026-12-08 21:27:31.159508378 +0000 UTC Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.842942 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7573h35m31.316569976s for next certificate rotation Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.857582 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.882145 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.900452 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.912903 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f051e184-acac-47cf-9e04-9df648288715-rootfs\") pod \"machine-config-daemon-q4fh5\" (UID: \"f051e184-acac-47cf-9e04-9df648288715\") " pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.912945 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvg7g\" (UniqueName: \"kubernetes.io/projected/f051e184-acac-47cf-9e04-9df648288715-kube-api-access-zvg7g\") pod \"machine-config-daemon-q4fh5\" (UID: \"f051e184-acac-47cf-9e04-9df648288715\") " pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.912967 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0ff88955-7cd9-4af4-9e0e-79614a1d2994-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.912985 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0ff88955-7cd9-4af4-9e0e-79614a1d2994-cnibin\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913003 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f051e184-acac-47cf-9e04-9df648288715-proxy-tls\") pod \"machine-config-daemon-q4fh5\" (UID: \"f051e184-acac-47cf-9e04-9df648288715\") " pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913020 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-run-netns\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913035 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-env-overrides\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913057 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj4tn\" (UniqueName: \"kubernetes.io/projected/fa44405c-042c-485a-ab6c-912dcd377751-kube-api-access-tj4tn\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913076 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-systemd-units\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913091 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-etc-openvswitch\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913109 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-openvswitch\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913098 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f051e184-acac-47cf-9e04-9df648288715-rootfs\") pod \"machine-config-daemon-q4fh5\" (UID: \"f051e184-acac-47cf-9e04-9df648288715\") " pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913195 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-log-socket\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913131 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-log-socket\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913246 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-systemd\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913268 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-ovn\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913295 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa44405c-042c-485a-ab6c-912dcd377751-ovn-node-metrics-cert\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913363 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwnwn\" (UniqueName: \"kubernetes.io/projected/0ff88955-7cd9-4af4-9e0e-79614a1d2994-kube-api-access-nwnwn\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913424 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f051e184-acac-47cf-9e04-9df648288715-mcd-auth-proxy-config\") pod \"machine-config-daemon-q4fh5\" (UID: \"f051e184-acac-47cf-9e04-9df648288715\") " pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913445 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-slash\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913466 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-node-log\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913501 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913522 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0ff88955-7cd9-4af4-9e0e-79614a1d2994-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913562 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0ff88955-7cd9-4af4-9e0e-79614a1d2994-os-release\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913601 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-var-lib-openvswitch\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913631 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-ovnkube-config\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913663 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-ovnkube-script-lib\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913689 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-kubelet\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913716 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0ff88955-7cd9-4af4-9e0e-79614a1d2994-cni-binary-copy\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913764 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ff88955-7cd9-4af4-9e0e-79614a1d2994-system-cni-dir\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913806 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-run-ovn-kubernetes\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913827 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-cni-bin\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913850 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-cni-netd\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913921 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-cni-netd\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913947 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-systemd\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.913969 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-ovn\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914275 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0ff88955-7cd9-4af4-9e0e-79614a1d2994-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914386 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-var-lib-openvswitch\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914472 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-etc-openvswitch\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914535 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ff88955-7cd9-4af4-9e0e-79614a1d2994-system-cni-dir\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914503 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-run-ovn-kubernetes\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914584 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-openvswitch\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914603 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-cni-bin\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914621 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914640 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-run-netns\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914665 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-slash\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914701 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-node-log\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914722 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0ff88955-7cd9-4af4-9e0e-79614a1d2994-cnibin\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.914756 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0ff88955-7cd9-4af4-9e0e-79614a1d2994-os-release\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.915206 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0ff88955-7cd9-4af4-9e0e-79614a1d2994-cni-binary-copy\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.915274 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-kubelet\") pod 
\"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.915475 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f051e184-acac-47cf-9e04-9df648288715-mcd-auth-proxy-config\") pod \"machine-config-daemon-q4fh5\" (UID: \"f051e184-acac-47cf-9e04-9df648288715\") " pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.915584 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0ff88955-7cd9-4af4-9e0e-79614a1d2994-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.915587 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-ovnkube-config\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.915695 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-ovnkube-script-lib\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.915693 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-systemd-units\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.915980 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-env-overrides\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.920012 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f051e184-acac-47cf-9e04-9df648288715-proxy-tls\") pod \"machine-config-daemon-q4fh5\" (UID: \"f051e184-acac-47cf-9e04-9df648288715\") " pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.920438 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa44405c-042c-485a-ab6c-912dcd377751-ovn-node-metrics-cert\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.932192 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.935405 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj4tn\" (UniqueName: \"kubernetes.io/projected/fa44405c-042c-485a-ab6c-912dcd377751-kube-api-access-tj4tn\") pod \"ovnkube-node-7642m\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.936578 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvg7g\" (UniqueName: \"kubernetes.io/projected/f051e184-acac-47cf-9e04-9df648288715-kube-api-access-zvg7g\") pod \"machine-config-daemon-q4fh5\" (UID: \"f051e184-acac-47cf-9e04-9df648288715\") " pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.938568 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwnwn\" (UniqueName: \"kubernetes.io/projected/0ff88955-7cd9-4af4-9e0e-79614a1d2994-kube-api-access-nwnwn\") pod \"multus-additional-cni-plugins-fqb6r\" (UID: \"0ff88955-7cd9-4af4-9e0e-79614a1d2994\") " pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.949149 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.964786 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.980074 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.994890 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rqjpz" Jan 27 07:51:59 crc kubenswrapper[4787]: I0127 07:51:59.995225 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.1
1\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:51:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: W0127 07:52:00.007626 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6f78168_0b0d_464d_b1c7_00bb9a69c0d1.slice/crio-9d585e70eef61281d63cb4cbe323a67195579c8a6d3f54ab2b20a452cdba089c WatchSource:0}: Error finding container 9d585e70eef61281d63cb4cbe323a67195579c8a6d3f54ab2b20a452cdba089c: Status 404 returned error can't find the container with id 9d585e70eef61281d63cb4cbe323a67195579c8a6d3f54ab2b20a452cdba089c Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.013951 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.019910 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.025990 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.035959 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.038112 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:34:16.751943024 +0000 UTC Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.039076 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.054747 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: W0127 07:52:00.055282 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf051e184_acac_47cf_9e04_9df648288715.slice/crio-05d452135124e832e7cd32209052e5d584d66c5a304dc432842415d240790523 WatchSource:0}: Error finding container 05d452135124e832e7cd32209052e5d584d66c5a304dc432842415d240790523: Status 404 returned error can't find the container with id 
05d452135124e832e7cd32209052e5d584d66c5a304dc432842415d240790523 Jan 27 07:52:00 crc kubenswrapper[4787]: W0127 07:52:00.059054 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ff88955_7cd9_4af4_9e0e_79614a1d2994.slice/crio-896c0966b7257a52016f9deb5a28b1bef853ffadc0f56ff6a3ad934d920d571b WatchSource:0}: Error finding container 896c0966b7257a52016f9deb5a28b1bef853ffadc0f56ff6a3ad934d920d571b: Status 404 returned error can't find the container with id 896c0966b7257a52016f9deb5a28b1bef853ffadc0f56ff6a3ad934d920d571b Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.070181 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.086123 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.254646 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" 
event={"ID":"0ff88955-7cd9-4af4-9e0e-79614a1d2994","Type":"ContainerStarted","Data":"896c0966b7257a52016f9deb5a28b1bef853ffadc0f56ff6a3ad934d920d571b"} Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.258857 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerStarted","Data":"8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748"} Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.258925 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerStarted","Data":"05d452135124e832e7cd32209052e5d584d66c5a304dc432842415d240790523"} Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.266073 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rqjpz" event={"ID":"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1","Type":"ContainerStarted","Data":"e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e"} Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.266143 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rqjpz" event={"ID":"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1","Type":"ContainerStarted","Data":"9d585e70eef61281d63cb4cbe323a67195579c8a6d3f54ab2b20a452cdba089c"} Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.267589 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fghc7" event={"ID":"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1","Type":"ContainerStarted","Data":"825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2"} Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.267653 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fghc7" event={"ID":"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1","Type":"ContainerStarted","Data":"f8ec228b302dbe560c3ebd7e776e3f9531d5a90d8d9d34aaddfe44434b94f91a"} Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.268906 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b" exitCode=0 Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.268996 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b"} Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.269066 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"40ba2420aaaf3b3cc8595bbc5c0c66623c4fd6910892664a858be15544316d35"} Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.299517 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.330872 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.348267 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.361089 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.382210 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.397000 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.410368 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.425375 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.438423 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.455699 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.478399 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.495063 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"en
v-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.507231 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.518812 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.532701 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.547822 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.560902 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.573992 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.588856 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.603142 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.622956 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.637539 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.651795 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.664758 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.688268 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.701046 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.713210 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.733785 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.932886 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.934751 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.934794 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.934805 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.934913 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.943967 4787 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.944374 4787 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.945705 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.945746 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.945757 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.945775 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.945789 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:00Z","lastTransitionTime":"2026-01-27T07:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:00 crc kubenswrapper[4787]: E0127 07:52:00.961741 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.965090 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.965139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.965148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.965165 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.965175 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:00Z","lastTransitionTime":"2026-01-27T07:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:00 crc kubenswrapper[4787]: E0127 07:52:00.978151 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.982629 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.982810 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.982935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.983038 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:00 crc kubenswrapper[4787]: I0127 07:52:00.983128 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:00Z","lastTransitionTime":"2026-01-27T07:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:00 crc kubenswrapper[4787]: E0127 07:52:00.997642 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.001347 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.001416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.001430 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.001451 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.001464 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:01Z","lastTransitionTime":"2026-01-27T07:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:01 crc kubenswrapper[4787]: E0127 07:52:01.017329 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.021964 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.022158 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.022253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.022337 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.022413 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:01Z","lastTransitionTime":"2026-01-27T07:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:01 crc kubenswrapper[4787]: E0127 07:52:01.034753 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: E0127 07:52:01.035179 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.037231 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.037303 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.037319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.037346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.037361 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:01Z","lastTransitionTime":"2026-01-27T07:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.039221 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:32:43.107697076 +0000 UTC Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.075951 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.075983 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.076054 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:01 crc kubenswrapper[4787]: E0127 07:52:01.076111 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:01 crc kubenswrapper[4787]: E0127 07:52:01.076235 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:01 crc kubenswrapper[4787]: E0127 07:52:01.076340 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.140791 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.140830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.140839 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.140856 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.140868 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:01Z","lastTransitionTime":"2026-01-27T07:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.243660 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.243700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.243709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.243727 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.243736 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:01Z","lastTransitionTime":"2026-01-27T07:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.273908 4787 generic.go:334] "Generic (PLEG): container finished" podID="0ff88955-7cd9-4af4-9e0e-79614a1d2994" containerID="5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8" exitCode=0 Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.274017 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" event={"ID":"0ff88955-7cd9-4af4-9e0e-79614a1d2994","Type":"ContainerDied","Data":"5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8"} Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.276169 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerStarted","Data":"cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950"} Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.291366 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-over
rides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.308542 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.334069 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.351030 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.351350 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.351373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.351383 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.351399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.351409 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:01Z","lastTransitionTime":"2026-01-27T07:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.351933 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.352807 4787 scope.go:117] "RemoveContainer" containerID="879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3" Jan 27 07:52:01 crc kubenswrapper[4787]: E0127 07:52:01.352965 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.364947 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.382620 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.396707 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.411671 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.427177 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc 
kubenswrapper[4787]: I0127 07:52:01.449667 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.454696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.454742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.454755 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.454779 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.454797 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:01Z","lastTransitionTime":"2026-01-27T07:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.463422 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.476036 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.490731 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.512460 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.535571 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.550031 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.557873 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.557914 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.557927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.557948 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.557959 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:01Z","lastTransitionTime":"2026-01-27T07:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.563183 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.579954 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.596699 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.615249 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc 
kubenswrapper[4787]: I0127 07:52:01.644631 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.659962 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.662573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.662621 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.662630 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.662644 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.662654 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:01Z","lastTransitionTime":"2026-01-27T07:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.674326 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.690136 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.703630 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d6209086784221118
4f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.720189 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.736795 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.750726 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.767815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.767867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.767882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.767903 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.767919 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:01Z","lastTransitionTime":"2026-01-27T07:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.871858 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.871904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.871914 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.871932 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.871943 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:01Z","lastTransitionTime":"2026-01-27T07:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.976989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.977574 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.977596 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.977620 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:01 crc kubenswrapper[4787]: I0127 07:52:01.977635 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:01Z","lastTransitionTime":"2026-01-27T07:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.040710 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:55:48.648360372 +0000 UTC Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.080504 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.080568 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.080579 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.080597 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.080610 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:02Z","lastTransitionTime":"2026-01-27T07:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.183275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.183315 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.183325 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.183344 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.183357 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:02Z","lastTransitionTime":"2026-01-27T07:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.287926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.288592 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.288604 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.288624 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.288662 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:02Z","lastTransitionTime":"2026-01-27T07:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.292307 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.292356 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.292372 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.292386 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.292399 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.294570 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" event={"ID":"0ff88955-7cd9-4af4-9e0e-79614a1d2994","Type":"ContainerStarted","Data":"a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.310520 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.335351 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.347517 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-rqwfc"] Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.348297 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-rqwfc" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.355585 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.355950 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.357601 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.357890 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.358017 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.372830 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.388304 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.393029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.393084 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.393099 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.393120 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.393135 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:02Z","lastTransitionTime":"2026-01-27T07:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.402949 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.424828 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.441452 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.449496 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/fe88a860-6bb6-40ef-8ed7-c16f06fad8d3-host\") pod \"node-ca-rqwfc\" (UID: \"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\") " pod="openshift-image-registry/node-ca-rqwfc" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.449571 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe88a860-6bb6-40ef-8ed7-c16f06fad8d3-serviceca\") pod \"node-ca-rqwfc\" (UID: \"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\") " pod="openshift-image-registry/node-ca-rqwfc" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.449747 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scmbf\" (UniqueName: \"kubernetes.io/projected/fe88a860-6bb6-40ef-8ed7-c16f06fad8d3-kube-api-access-scmbf\") pod \"node-ca-rqwfc\" (UID: \"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\") " pod="openshift-image-registry/node-ca-rqwfc" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.455893 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.468674 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.484184 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d6209086784221118
4f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.496003 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.496051 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.496062 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.496115 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.496127 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:02Z","lastTransitionTime":"2026-01-27T07:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.499980 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.513110 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.524989 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.544240 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.551380 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe88a860-6bb6-40ef-8ed7-c16f06fad8d3-host\") pod \"node-ca-rqwfc\" (UID: \"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\") " pod="openshift-image-registry/node-ca-rqwfc" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.551430 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe88a860-6bb6-40ef-8ed7-c16f06fad8d3-serviceca\") pod \"node-ca-rqwfc\" (UID: \"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\") " pod="openshift-image-registry/node-ca-rqwfc" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.551481 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scmbf\" (UniqueName: \"kubernetes.io/projected/fe88a860-6bb6-40ef-8ed7-c16f06fad8d3-kube-api-access-scmbf\") pod \"node-ca-rqwfc\" (UID: \"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\") " pod="openshift-image-registry/node-ca-rqwfc" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.551863 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe88a860-6bb6-40ef-8ed7-c16f06fad8d3-host\") pod \"node-ca-rqwfc\" (UID: \"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\") " pod="openshift-image-registry/node-ca-rqwfc" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.552772 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe88a860-6bb6-40ef-8ed7-c16f06fad8d3-serviceca\") pod \"node-ca-rqwfc\" (UID: \"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\") " pod="openshift-image-registry/node-ca-rqwfc" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.559177 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.572186 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.577294 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scmbf\" (UniqueName: \"kubernetes.io/projected/fe88a860-6bb6-40ef-8ed7-c16f06fad8d3-kube-api-access-scmbf\") pod \"node-ca-rqwfc\" (UID: \"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\") " pod="openshift-image-registry/node-ca-rqwfc" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.584659 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.595895 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.598693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.598737 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.598750 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.598769 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.598780 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:02Z","lastTransitionTime":"2026-01-27T07:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.610329 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.625603 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.638077 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.648860 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.661648 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.668508 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rqwfc" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.676902 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: W0127 07:52:02.681916 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe88a860_6bb6_40ef_8ed7_c16f06fad8d3.slice/crio-15ca82168208a3627bfee0d952e096bff064168ebc2b72d0463c309c4b7bd155 WatchSource:0}: Error finding container 15ca82168208a3627bfee0d952e096bff064168ebc2b72d0463c309c4b7bd155: Status 404 returned error can't find the container with id 15ca82168208a3627bfee0d952e096bff064168ebc2b72d0463c309c4b7bd155 Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.697218 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"c
ert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\
\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.701904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.701966 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.701980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.702025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.702036 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:02Z","lastTransitionTime":"2026-01-27T07:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.714287 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.731788 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.752497 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.753043 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.753146 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:52:10.753122142 +0000 UTC m=+36.405477634 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.753211 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.753244 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.753284 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.753310 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.753443 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.753498 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:10.753485249 +0000 UTC m=+36.405840741 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.753968 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.753987 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.753999 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.754024 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:10.754016959 +0000 UTC m=+36.406372451 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.754054 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.754076 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 07:52:10.754067891 +0000 UTC m=+36.406423373 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.754118 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.754127 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.754134 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:02 crc kubenswrapper[4787]: E0127 07:52:02.754151 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:10.754146003 +0000 UTC m=+36.406501495 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.804860 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.804897 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.804908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.804926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.804938 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:02Z","lastTransitionTime":"2026-01-27T07:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.908842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.908911 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.908927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.908952 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:02 crc kubenswrapper[4787]: I0127 07:52:02.908969 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:02Z","lastTransitionTime":"2026-01-27T07:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.012450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.012500 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.012511 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.012530 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.012542 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:03Z","lastTransitionTime":"2026-01-27T07:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.041959 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:40:58.558283305 +0000 UTC Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.075705 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.075769 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.075705 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:03 crc kubenswrapper[4787]: E0127 07:52:03.075940 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:03 crc kubenswrapper[4787]: E0127 07:52:03.076112 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:03 crc kubenswrapper[4787]: E0127 07:52:03.076245 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.114760 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.114802 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.114812 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.114827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.114837 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:03Z","lastTransitionTime":"2026-01-27T07:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.218410 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.218463 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.218474 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.218495 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.218510 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:03Z","lastTransitionTime":"2026-01-27T07:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.304835 4787 generic.go:334] "Generic (PLEG): container finished" podID="0ff88955-7cd9-4af4-9e0e-79614a1d2994" containerID="a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6" exitCode=0 Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.304913 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" event={"ID":"0ff88955-7cd9-4af4-9e0e-79614a1d2994","Type":"ContainerDied","Data":"a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.310218 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.311348 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rqwfc" event={"ID":"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3","Type":"ContainerStarted","Data":"f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.311372 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rqwfc" event={"ID":"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3","Type":"ContainerStarted","Data":"15ca82168208a3627bfee0d952e096bff064168ebc2b72d0463c309c4b7bd155"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.321041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.321066 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.321075 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.321089 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.321100 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:03Z","lastTransitionTime":"2026-01-27T07:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.339108 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.355411 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.369867 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.383288 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.398786 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d6209086784221118
4f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.412056 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.424253 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.425576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.425625 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.425635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.425653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.425663 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:03Z","lastTransitionTime":"2026-01-27T07:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.435617 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.445585 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.471107 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.491329 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.505600 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.526097 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.527765 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.527802 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.527812 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.527830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.527841 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:03Z","lastTransitionTime":"2026-01-27T07:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.543794 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.561463 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.585886 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.608020 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.627652 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.630786 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.630840 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.630855 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.630876 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.630890 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:03Z","lastTransitionTime":"2026-01-27T07:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.642786 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.658781 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.673188 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.688111 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.700082 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.710419 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.733884 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.733935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.733946 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.733968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.733981 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:03Z","lastTransitionTime":"2026-01-27T07:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.754808 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.795897 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.829041 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.837343 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.837390 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.837405 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.837434 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.837449 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:03Z","lastTransitionTime":"2026-01-27T07:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.871480 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.910399 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.940757 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.940830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.940849 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.940876 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.940896 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:03Z","lastTransitionTime":"2026-01-27T07:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:03 crc kubenswrapper[4787]: I0127 07:52:03.960382 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.042305 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:00:56.153231229 +0000 UTC Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.045900 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.045965 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.045977 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.045995 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.046012 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:04Z","lastTransitionTime":"2026-01-27T07:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.149125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.149181 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.149195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.149212 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.149224 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:04Z","lastTransitionTime":"2026-01-27T07:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.252890 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.252976 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.252999 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.253028 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.253064 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:04Z","lastTransitionTime":"2026-01-27T07:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.324083 4787 generic.go:334] "Generic (PLEG): container finished" podID="0ff88955-7cd9-4af4-9e0e-79614a1d2994" containerID="82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487" exitCode=0 Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.324137 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" event={"ID":"0ff88955-7cd9-4af4-9e0e-79614a1d2994","Type":"ContainerDied","Data":"82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487"} Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.345224 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.356970 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.357098 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.357126 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.357165 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.357193 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:04Z","lastTransitionTime":"2026-01-27T07:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.372656 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.390221 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.403400 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.419276 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.435407 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.447973 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.460119 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.460164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.460180 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.460204 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.460218 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:04Z","lastTransitionTime":"2026-01-27T07:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.463690 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.477983 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.496089 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.521444 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.549328 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.562999 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.563062 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.563114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.563148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.563170 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:04Z","lastTransitionTime":"2026-01-27T07:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.567142 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.584734 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.600917 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.666472 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.666520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.666532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.666575 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.666589 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:04Z","lastTransitionTime":"2026-01-27T07:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.770271 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.770359 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.770371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.770388 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.770400 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:04Z","lastTransitionTime":"2026-01-27T07:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.851351 4787 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.879020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.879054 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.879062 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.879080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.879091 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:04Z","lastTransitionTime":"2026-01-27T07:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.983351 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.983402 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.983414 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.983436 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:52:04 crc kubenswrapper[4787]: I0127 07:52:04.983449 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:04Z","lastTransitionTime":"2026-01-27T07:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.042688 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:55:45.754012847 +0000 UTC
Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.076805 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.076860 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 07:52:05 crc kubenswrapper[4787]: E0127 07:52:05.077025 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.077113 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 07:52:05 crc kubenswrapper[4787]: E0127 07:52:05.077285 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 07:52:05 crc kubenswrapper[4787]: E0127 07:52:05.077418 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.088769 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.089179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.089189 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.089206 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.089218 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:05Z","lastTransitionTime":"2026-01-27T07:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.105451 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d01161
0c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.122536 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.137800 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 
07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.153128 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.166740 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.180987 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.191326 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.191668 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.191761 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.191913 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.192061 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:05Z","lastTransitionTime":"2026-01-27T07:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.199635 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.214830 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.227496 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.243618 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.260726 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.282925 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.297660 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.297743 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.297754 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.297772 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.297782 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:05Z","lastTransitionTime":"2026-01-27T07:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.297934 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.312647 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.328693 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.331649 4787 generic.go:334] "Generic (PLEG): container finished" podID="0ff88955-7cd9-4af4-9e0e-79614a1d2994" containerID="f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823" exitCode=0 Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.331778 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" event={"ID":"0ff88955-7cd9-4af4-9e0e-79614a1d2994","Type":"ContainerDied","Data":"f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823"} Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.345989 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413"} Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.347966 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.375601 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.390106 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.400685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.400730 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.400744 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.400764 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.400776 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:05Z","lastTransitionTime":"2026-01-27T07:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.410464 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.424849 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.439727 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.461810 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.476397 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.506002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.506054 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.506069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.506088 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.506100 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:05Z","lastTransitionTime":"2026-01-27T07:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.508829 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0
b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.549839 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.590244 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.608641 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.608682 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.608693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.608712 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.608727 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:05Z","lastTransitionTime":"2026-01-27T07:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.629609 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.669302 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.713692 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.713726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.713736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.713755 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.713766 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:05Z","lastTransitionTime":"2026-01-27T07:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.714716 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.751266 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.817299 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.817352 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.817366 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.817385 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.817402 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:05Z","lastTransitionTime":"2026-01-27T07:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.922350 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.922396 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.922419 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.922487 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:05 crc kubenswrapper[4787]: I0127 07:52:05.922505 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:05Z","lastTransitionTime":"2026-01-27T07:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.025242 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.025277 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.025287 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.025301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.025311 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:06Z","lastTransitionTime":"2026-01-27T07:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.043632 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:00:10.466147647 +0000 UTC Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.148685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.148721 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.148731 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.148746 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.148754 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:06Z","lastTransitionTime":"2026-01-27T07:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.251296 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.251331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.251339 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.251352 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.251361 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:06Z","lastTransitionTime":"2026-01-27T07:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.353204 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.353257 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.353272 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.353292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.353308 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:06Z","lastTransitionTime":"2026-01-27T07:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.353793 4787 generic.go:334] "Generic (PLEG): container finished" podID="0ff88955-7cd9-4af4-9e0e-79614a1d2994" containerID="c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2" exitCode=0 Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.353918 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" event={"ID":"0ff88955-7cd9-4af4-9e0e-79614a1d2994","Type":"ContainerDied","Data":"c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2"} Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.375504 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.395675 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.413326 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.427394 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.440000 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.456245 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.456290 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.456299 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.456313 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.456323 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:06Z","lastTransitionTime":"2026-01-27T07:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.459171 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.473469 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.485670 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.506809 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.522937 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.539196 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.559867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.559991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.560010 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.560033 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.560046 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:06Z","lastTransitionTime":"2026-01-27T07:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.561230 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d01161
0c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.577422 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.590176 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 
07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.600099 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-27T07:52:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.663833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.664219 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.664229 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.664243 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.664253 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:06Z","lastTransitionTime":"2026-01-27T07:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.767685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.767746 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.767768 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.767793 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.767811 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:06Z","lastTransitionTime":"2026-01-27T07:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.870851 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.870899 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.870910 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.870929 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.870948 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:06Z","lastTransitionTime":"2026-01-27T07:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.974531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.974593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.974604 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.974623 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:06 crc kubenswrapper[4787]: I0127 07:52:06.974634 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:06Z","lastTransitionTime":"2026-01-27T07:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.044824 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:42:28.099317954 +0000 UTC Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.075801 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.075874 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.075821 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:07 crc kubenswrapper[4787]: E0127 07:52:07.076050 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:07 crc kubenswrapper[4787]: E0127 07:52:07.076243 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:07 crc kubenswrapper[4787]: E0127 07:52:07.076393 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.078133 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.078174 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.078185 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.078204 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.078217 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:07Z","lastTransitionTime":"2026-01-27T07:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.189089 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.189163 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.189180 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.189200 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.189211 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:07Z","lastTransitionTime":"2026-01-27T07:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.292436 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.292483 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.292494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.292509 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.292520 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:07Z","lastTransitionTime":"2026-01-27T07:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.362123 4787 generic.go:334] "Generic (PLEG): container finished" podID="0ff88955-7cd9-4af4-9e0e-79614a1d2994" containerID="0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710" exitCode=0 Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.362201 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" event={"ID":"0ff88955-7cd9-4af4-9e0e-79614a1d2994","Type":"ContainerDied","Data":"0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710"} Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.370352 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"6fc438285725e8717b0f840dbe429df899942e4ffb4df80a73d92b32cc4635b5"} Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.370856 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.371003 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.384876 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent
\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.395525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.395573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.395583 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.395596 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.395607 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:07Z","lastTransitionTime":"2026-01-27T07:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.402456 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6
f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.423397 4787 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78
137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.437817 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.437916 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.439204 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.451850 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.465102 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.488334 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.498578 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.498639 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.498655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.498681 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.498695 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:07Z","lastTransitionTime":"2026-01-27T07:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.504408 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.521003 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.546284 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.563773 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.580889 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.594132 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.602951 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.602998 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.603010 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.603031 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.603044 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:07Z","lastTransitionTime":"2026-01-27T07:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.608365 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.619841 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.633570 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c89
24a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.660071 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.678065 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.695979 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.706034 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.706089 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.706101 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.706124 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.706137 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:07Z","lastTransitionTime":"2026-01-27T07:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.712722 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.734390 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.762904 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc438285725e8717b0f840dbe429df899942e4f
fb4df80a73d92b32cc4635b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.783194 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.795521 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.805944 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.809474 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.809512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.809540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.809573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.809585 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:07Z","lastTransitionTime":"2026-01-27T07:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.818141 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.833009 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.849091 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.861491 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.872409 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.913147 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.913201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.913210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.913226 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:07 crc kubenswrapper[4787]: I0127 07:52:07.913238 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:07Z","lastTransitionTime":"2026-01-27T07:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.017257 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.017303 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.017312 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.017326 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.017336 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:08Z","lastTransitionTime":"2026-01-27T07:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.045894 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:23:35.4983036 +0000 UTC Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.120534 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.120620 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.120632 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.120653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.120669 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:08Z","lastTransitionTime":"2026-01-27T07:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.223360 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.223409 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.223422 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.223441 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.223453 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:08Z","lastTransitionTime":"2026-01-27T07:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.326426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.326487 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.326499 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.326518 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.326530 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:08Z","lastTransitionTime":"2026-01-27T07:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.378411 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" event={"ID":"0ff88955-7cd9-4af4-9e0e-79614a1d2994","Type":"ContainerStarted","Data":"47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b"} Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.378529 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.392657 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.406723 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.419816 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.429700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.429831 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.429862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.429890 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.429910 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:08Z","lastTransitionTime":"2026-01-27T07:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.436868 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.455033 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.477061 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc
2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.496699 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc438285725e8717b0f840dbe429df899942e4f
fb4df80a73d92b32cc4635b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.517886 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.532258 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.532307 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.532321 4787 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.532343 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.532358 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:08Z","lastTransitionTime":"2026-01-27T07:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.534462 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.548507 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.567010 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.578606 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.589460 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.600818 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.616240 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c67097
92639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.635673 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.635747 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.635766 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.635795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.635813 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:08Z","lastTransitionTime":"2026-01-27T07:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.738960 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.739028 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.739038 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.739055 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.739068 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:08Z","lastTransitionTime":"2026-01-27T07:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.842268 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.842334 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.842356 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.842388 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.842408 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:08Z","lastTransitionTime":"2026-01-27T07:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.945939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.946002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.946016 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.946042 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:08 crc kubenswrapper[4787]: I0127 07:52:08.946058 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:08Z","lastTransitionTime":"2026-01-27T07:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.046577 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:30:44.757422428 +0000 UTC Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.049428 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.049478 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.049495 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.049514 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.049528 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:09Z","lastTransitionTime":"2026-01-27T07:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.077314 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.077377 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.077328 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:09 crc kubenswrapper[4787]: E0127 07:52:09.077526 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:09 crc kubenswrapper[4787]: E0127 07:52:09.077640 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:09 crc kubenswrapper[4787]: E0127 07:52:09.077709 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.152458 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.152504 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.152514 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.152533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.152544 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:09Z","lastTransitionTime":"2026-01-27T07:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.255258 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.255295 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.255305 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.255319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.255327 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:09Z","lastTransitionTime":"2026-01-27T07:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.357887 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.357939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.357949 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.357978 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.357991 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:09Z","lastTransitionTime":"2026-01-27T07:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.381684 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.461304 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.461406 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.461439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.461459 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.461471 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:09Z","lastTransitionTime":"2026-01-27T07:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.565911 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.565952 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.565961 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.565978 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.565990 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:09Z","lastTransitionTime":"2026-01-27T07:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.668483 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.668527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.668539 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.668572 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.668586 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:09Z","lastTransitionTime":"2026-01-27T07:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.771847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.771895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.771909 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.771927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.771942 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:09Z","lastTransitionTime":"2026-01-27T07:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.875852 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.875982 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.876002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.876067 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.876085 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:09Z","lastTransitionTime":"2026-01-27T07:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.979707 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.979762 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.979780 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.979807 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:09 crc kubenswrapper[4787]: I0127 07:52:09.979826 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:09Z","lastTransitionTime":"2026-01-27T07:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.047334 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:24:10.22629506 +0000 UTC Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.082650 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.082693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.082703 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.082720 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.082732 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:10Z","lastTransitionTime":"2026-01-27T07:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.185114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.185147 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.185155 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.185175 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.185189 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:10Z","lastTransitionTime":"2026-01-27T07:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.288054 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.288112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.288126 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.288150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.288165 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:10Z","lastTransitionTime":"2026-01-27T07:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.386210 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/0.log" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.389830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.389871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.389886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.389906 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.389920 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:10Z","lastTransitionTime":"2026-01-27T07:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.391412 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="6fc438285725e8717b0f840dbe429df899942e4ffb4df80a73d92b32cc4635b5" exitCode=1 Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.391472 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"6fc438285725e8717b0f840dbe429df899942e4ffb4df80a73d92b32cc4635b5"} Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.392195 4787 scope.go:117] "RemoveContainer" containerID="6fc438285725e8717b0f840dbe429df899942e4ffb4df80a73d92b32cc4635b5" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.404726 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.418039 4787 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.435858 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.454154 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.468936 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.486677 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.506797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.506867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.506885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.506912 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.506933 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:10Z","lastTransitionTime":"2026-01-27T07:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.520044 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.568926 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.595574 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc
2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.609830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.609868 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.609877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.609893 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.609903 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:10Z","lastTransitionTime":"2026-01-27T07:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.621480 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.643810 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.664214 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc438285725e8717b0f840dbe429df899942e4f
fb4df80a73d92b32cc4635b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc438285725e8717b0f840dbe429df899942e4ffb4df80a73d92b32cc4635b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:10Z\\\",\\\"message\\\":\\\"dler 7\\\\nI0127 07:52:09.988526 6153 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 07:52:09.988682 6153 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.988716 6153 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.988908 6153 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.989091 6153 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.989155 6153 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.989230 6153 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.989309 6153 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47
358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.677705 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.692606 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.703825 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.713080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.713161 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.713174 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.713214 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.713230 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:10Z","lastTransitionTime":"2026-01-27T07:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.815828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.815886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.815900 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.815919 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.815932 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:10Z","lastTransitionTime":"2026-01-27T07:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.850624 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.850764 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.850796 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.850825 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.850867 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:52:26.850831208 +0000 UTC m=+52.503186700 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.850938 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.850951 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.850963 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.850976 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.850985 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.850994 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.850998 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.851020 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.851054 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:26.851039453 +0000 UTC m=+52.503394945 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.851074 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:26.851066753 +0000 UTC m=+52.503422245 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.851142 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:26.851111004 +0000 UTC m=+52.503466666 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.851169 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:52:10 crc kubenswrapper[4787]: E0127 07:52:10.851308 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:26.851275477 +0000 UTC m=+52.503630999 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.920178 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.920276 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.920294 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.920336 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:10 crc kubenswrapper[4787]: I0127 07:52:10.920349 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:10Z","lastTransitionTime":"2026-01-27T07:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.023927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.024041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.024061 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.024091 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.024114 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.048293 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:45:39.377574685 +0000 UTC Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.076369 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.076463 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:11 crc kubenswrapper[4787]: E0127 07:52:11.076669 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.076699 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:11 crc kubenswrapper[4787]: E0127 07:52:11.076948 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:11 crc kubenswrapper[4787]: E0127 07:52:11.077155 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.127300 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.127357 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.127371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.127399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.127415 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.229929 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.230003 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.230023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.230051 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.230071 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: E0127 07:52:11.256193 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.264938 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.264983 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.265000 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.265022 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.265040 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: E0127 07:52:11.282141 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.286822 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.286917 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.286945 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.286982 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.287009 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: E0127 07:52:11.304483 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.312824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.312889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.312910 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.312939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.312960 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: E0127 07:52:11.337274 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.341908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.341970 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.341983 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.342009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.342023 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: E0127 07:52:11.357891 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: E0127 07:52:11.358050 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.360328 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.360373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.360386 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.360408 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.360420 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.397105 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/0.log" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.400052 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995"} Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.400254 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.419037 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.432425 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.449366 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.463348 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.463408 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.463425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.463452 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.463476 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.467451 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.494175 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137cc
c23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.514728 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.536984 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\
\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc438285725e8717b0f840dbe429df899942e4ffb4df80a73d92b32cc4635b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:10Z\\\",\\\"message\\\":\\\"dler 7\\\\nI0127 07:52:09.988526 6153 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 07:52:09.988682 6153 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.988716 6153 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.988908 6153 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.989091 6153 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.989155 6153 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.989230 6153 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.989309 6153 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.549935 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.562422 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.566589 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.566624 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.566636 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.566653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.566666 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.580731 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.598593 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.615622 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.631116 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.653429 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.668874 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.668922 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.668934 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.668948 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.668959 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.673371 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.772283 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.772336 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.772348 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.772369 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.772383 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.876326 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.876397 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.876415 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.876444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.876467 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.979371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.979947 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.980054 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.980151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:11 crc kubenswrapper[4787]: I0127 07:52:11.980234 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:11Z","lastTransitionTime":"2026-01-27T07:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.049539 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:37:12.216947288 +0000 UTC Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.077026 4787 scope.go:117] "RemoveContainer" containerID="879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.083512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.083597 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.083617 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.083640 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.083655 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:12Z","lastTransitionTime":"2026-01-27T07:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.112960 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5"] Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.113820 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.116328 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.116798 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.130033 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.146887 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.166193 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.183620 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d6209086784221118
4f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.186469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.186565 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.186577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.186592 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.186603 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:12Z","lastTransitionTime":"2026-01-27T07:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.200433 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.214920 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.229108 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.243228 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.268511 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/643aaef9-e302-436b-943e-940480ef74fc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vhss5\" (UID: \"643aaef9-e302-436b-943e-940480ef74fc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.268678 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/643aaef9-e302-436b-943e-940480ef74fc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vhss5\" (UID: \"643aaef9-e302-436b-943e-940480ef74fc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.268718 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q989\" (UniqueName: \"kubernetes.io/projected/643aaef9-e302-436b-943e-940480ef74fc-kube-api-access-8q989\") pod \"ovnkube-control-plane-749d76644c-vhss5\" (UID: \"643aaef9-e302-436b-943e-940480ef74fc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.268773 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/643aaef9-e302-436b-943e-940480ef74fc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vhss5\" (UID: \"643aaef9-e302-436b-943e-940480ef74fc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.281767 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.289225 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.289285 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.289302 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.289648 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.289667 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:12Z","lastTransitionTime":"2026-01-27T07:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.305184 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.325291 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.344227 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.358504 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.369336 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/643aaef9-e302-436b-943e-940480ef74fc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vhss5\" (UID: \"643aaef9-e302-436b-943e-940480ef74fc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.369384 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q989\" (UniqueName: \"kubernetes.io/projected/643aaef9-e302-436b-943e-940480ef74fc-kube-api-access-8q989\") pod \"ovnkube-control-plane-749d76644c-vhss5\" (UID: \"643aaef9-e302-436b-943e-940480ef74fc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.369482 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/643aaef9-e302-436b-943e-940480ef74fc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vhss5\" (UID: \"643aaef9-e302-436b-943e-940480ef74fc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.369523 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/643aaef9-e302-436b-943e-940480ef74fc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vhss5\" (UID: \"643aaef9-e302-436b-943e-940480ef74fc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.370089 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/643aaef9-e302-436b-943e-940480ef74fc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vhss5\" (UID: \"643aaef9-e302-436b-943e-940480ef74fc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc 
kubenswrapper[4787]: I0127 07:52:12.370497 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/643aaef9-e302-436b-943e-940480ef74fc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vhss5\" (UID: \"643aaef9-e302-436b-943e-940480ef74fc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.376363 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/643aaef9-e302-436b-943e-940480ef74fc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vhss5\" (UID: \"643aaef9-e302-436b-943e-940480ef74fc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.377911 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe9698
1aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f8
23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.386990 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q989\" (UniqueName: \"kubernetes.io/projected/643aaef9-e302-436b-943e-940480ef74fc-kube-api-access-8q989\") pod \"ovnkube-control-plane-749d76644c-vhss5\" (UID: \"643aaef9-e302-436b-943e-940480ef74fc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.393398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.393436 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.393453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.393472 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.393485 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:12Z","lastTransitionTime":"2026-01-27T07:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.401478 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc438285725e8717b0f840dbe429df899942e4ffb4df80a73d92b32cc4635b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:10Z\\\",\\\"message\\\":\\\"dler 7\\\\nI0127 07:52:09.988526 6153 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 07:52:09.988682 6153 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.988716 6153 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.988908 6153 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.989091 6153 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.989155 6153 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.989230 6153 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.989309 6153 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.404821 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/1.log" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.405294 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/0.log" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.407748 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995" exitCode=1 Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.407795 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995"} Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.407842 4787 scope.go:117] "RemoveContainer" containerID="6fc438285725e8717b0f840dbe429df899942e4ffb4df80a73d92b32cc4635b5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.408586 4787 scope.go:117] "RemoveContainer" containerID="023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995" Jan 27 07:52:12 crc kubenswrapper[4787]: E0127 07:52:12.408795 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" 
podUID="fa44405c-042c-485a-ab6c-912dcd377751" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.414143 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.433999 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc438285725e8717b0f840dbe429df899942e4ffb4df80a73d92b32cc4635b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:10Z\\\",\\\"message\\\":\\\"dler 7\\\\nI0127 07:52:09.988526 6153 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 07:52:09.988682 6153 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.988716 6153 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.988908 6153 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.989091 6153 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.989155 6153 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.989230 6153 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.989309 6153 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"80c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0127 07:52:11.779634 6307 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0127 07:52:11.779647 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
contro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.434444 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.446899 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.459709 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.470066 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.484626 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.496841 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.496886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.496899 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.496922 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.496937 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:12Z","lastTransitionTime":"2026-01-27T07:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.497616 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.512155 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.526022 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.539590 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.553797 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.571147 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.593255 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.598974 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.599011 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.599024 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.599045 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.599061 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:12Z","lastTransitionTime":"2026-01-27T07:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.608150 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.622404 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: W0127 07:52:12.638511 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod643aaef9_e302_436b_943e_940480ef74fc.slice/crio-13a1bc0eb4e46bce4afb0df853789c0fd5320c9f7e6ff4d9bbe7f61afa193a12 WatchSource:0}: Error finding container 13a1bc0eb4e46bce4afb0df853789c0fd5320c9f7e6ff4d9bbe7f61afa193a12: Status 404 returned error can't find the container with id 13a1bc0eb4e46bce4afb0df853789c0fd5320c9f7e6ff4d9bbe7f61afa193a12 Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.638997 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.658219 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:12Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.701575 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.701622 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.701632 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.701647 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.701658 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:12Z","lastTransitionTime":"2026-01-27T07:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.803936 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.803981 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.803991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.804009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.804019 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:12Z","lastTransitionTime":"2026-01-27T07:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.907491 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.907562 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.907575 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.907598 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:12 crc kubenswrapper[4787]: I0127 07:52:12.907612 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:12Z","lastTransitionTime":"2026-01-27T07:52:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.010243 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.010311 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.010326 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.010350 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.010365 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:13Z","lastTransitionTime":"2026-01-27T07:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.050618 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:59:32.079129351 +0000 UTC Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.076171 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.076270 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:13 crc kubenswrapper[4787]: E0127 07:52:13.076410 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.076444 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:13 crc kubenswrapper[4787]: E0127 07:52:13.076594 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:13 crc kubenswrapper[4787]: E0127 07:52:13.076766 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.113642 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.113697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.113708 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.113726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.113738 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:13Z","lastTransitionTime":"2026-01-27T07:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.217187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.217239 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.217252 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.217274 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.217288 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:13Z","lastTransitionTime":"2026-01-27T07:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.296807 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.320595 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.320645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.320662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.320688 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.320703 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:13Z","lastTransitionTime":"2026-01-27T07:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.416879 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.419723 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.420168 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.423042 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.423066 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.423082 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.423098 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.423108 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:13Z","lastTransitionTime":"2026-01-27T07:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.425267 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" event={"ID":"643aaef9-e302-436b-943e-940480ef74fc","Type":"ContainerStarted","Data":"ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.425380 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" event={"ID":"643aaef9-e302-436b-943e-940480ef74fc","Type":"ContainerStarted","Data":"5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.425409 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" event={"ID":"643aaef9-e302-436b-943e-940480ef74fc","Type":"ContainerStarted","Data":"13a1bc0eb4e46bce4afb0df853789c0fd5320c9f7e6ff4d9bbe7f61afa193a12"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.427700 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/1.log" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.432144 4787 scope.go:117] "RemoveContainer" containerID="023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995" Jan 27 07:52:13 crc kubenswrapper[4787]: E0127 07:52:13.432335 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.433917 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.448570 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.466047 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.481216 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.494406 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.505938 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.519175 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.525507 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.525595 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.525611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.525635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.525650 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:13Z","lastTransitionTime":"2026-01-27T07:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.534656 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.549150 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.563382 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.576818 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.586044 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-vws75"] Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.586956 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:13 crc kubenswrapper[4787]: E0127 07:52:13.587153 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.597598 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5
a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.620503 4787 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.628611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.628677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.628691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.628714 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.628730 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:13Z","lastTransitionTime":"2026-01-27T07:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.638079 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.651905 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.675277 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc438285725e8717b0f840dbe429df899942e4ffb4df80a73d92b32cc4635b5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:10Z\\\",\\\"message\\\":\\\"dler 7\\\\nI0127 07:52:09.988526 6153 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 07:52:09.988682 6153 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.988716 6153 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.988908 6153 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.989091 6153 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.989155 6153 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 07:52:09.989230 6153 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 07:52:09.989309 6153 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"80c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: 
true,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0127 07:52:11.779634 6307 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0127 07:52:11.779647 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network contro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-l
ib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.684502 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.684588 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq2mg\" (UniqueName: \"kubernetes.io/projected/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-kube-api-access-sq2mg\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.690904 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.706078 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.719981 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.732242 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.732292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.732304 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.732325 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.732338 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:13Z","lastTransitionTime":"2026-01-27T07:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.733856 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.747199 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.766722 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.780930 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.785215 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.785282 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq2mg\" (UniqueName: \"kubernetes.io/projected/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-kube-api-access-sq2mg\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:13 crc kubenswrapper[4787]: E0127 
07:52:13.785423 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:13 crc kubenswrapper[4787]: E0127 07:52:13.785499 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs podName:3969f21f-ab36-49b4-9a9c-02cf19e65ad0 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:14.285478108 +0000 UTC m=+39.937833600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs") pod "network-metrics-daemon-vws75" (UID: "3969f21f-ab36-49b4-9a9c-02cf19e65ad0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.794293 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.803731 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq2mg\" (UniqueName: \"kubernetes.io/projected/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-kube-api-access-sq2mg\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.808765 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.830785 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.835774 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.835810 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.835822 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.835841 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.835856 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:13Z","lastTransitionTime":"2026-01-27T07:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.847920 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.893980 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137cc
c23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.918238 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.938958 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.939008 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.939020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.939041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.939054 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:13Z","lastTransitionTime":"2026-01-27T07:52:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.939280 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.950307 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.969342 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e49
34d8ae66ea7b9928e95ac995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"80c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0127 07:52:11.779634 6307 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0127 07:52:11.779647 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network contro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:13 crc kubenswrapper[4787]: I0127 07:52:13.982975 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.042008 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.042070 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.042087 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.042112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.042131 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:14Z","lastTransitionTime":"2026-01-27T07:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.051424 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:28:52.946399306 +0000 UTC Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.145527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.145613 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.145633 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.145660 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.145676 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:14Z","lastTransitionTime":"2026-01-27T07:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.248798 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.248881 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.248905 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.248939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.248966 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:14Z","lastTransitionTime":"2026-01-27T07:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.290761 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:14 crc kubenswrapper[4787]: E0127 07:52:14.290928 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:14 crc kubenswrapper[4787]: E0127 07:52:14.291009 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs podName:3969f21f-ab36-49b4-9a9c-02cf19e65ad0 nodeName:}" failed. 
No retries permitted until 2026-01-27 07:52:15.290984458 +0000 UTC m=+40.943339950 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs") pod "network-metrics-daemon-vws75" (UID: "3969f21f-ab36-49b4-9a9c-02cf19e65ad0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.352161 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.352269 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.352291 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.352320 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.352341 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:14Z","lastTransitionTime":"2026-01-27T07:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.455760 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.456402 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.456419 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.456449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.456466 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:14Z","lastTransitionTime":"2026-01-27T07:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.559729 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.559784 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.559794 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.559816 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.559829 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:14Z","lastTransitionTime":"2026-01-27T07:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.663456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.663500 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.663510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.663527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.663540 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:14Z","lastTransitionTime":"2026-01-27T07:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.766717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.766800 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.766817 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.766844 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.766860 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:14Z","lastTransitionTime":"2026-01-27T07:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.870786 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.870847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.870862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.870883 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.870897 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:14Z","lastTransitionTime":"2026-01-27T07:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.973954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.974005 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.974018 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.974039 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:14 crc kubenswrapper[4787]: I0127 07:52:14.974051 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:14Z","lastTransitionTime":"2026-01-27T07:52:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.051746 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:24:59.099095412 +0000 UTC Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.075921 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.076044 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:15 crc kubenswrapper[4787]: E0127 07:52:15.076110 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.076213 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:15 crc kubenswrapper[4787]: E0127 07:52:15.076256 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.076345 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:15 crc kubenswrapper[4787]: E0127 07:52:15.076615 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:15 crc kubenswrapper[4787]: E0127 07:52:15.076787 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.077545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.077660 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.077686 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.077725 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.077753 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:15Z","lastTransitionTime":"2026-01-27T07:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.096908 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.122999 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"80c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0127 07:52:11.779634 6307 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0127 07:52:11.779647 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network contro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.136624 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.149639 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.166641 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.181784 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.181847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:15 
crc kubenswrapper[4787]: I0127 07:52:15.181868 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.181893 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.181909 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:15Z","lastTransitionTime":"2026-01-27T07:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.182405 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.199545 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.210303 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.225168 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.239502 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.256876 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.274321 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.286799 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.286849 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.286860 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.286882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.286897 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:15Z","lastTransitionTime":"2026-01-27T07:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.287215 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.304686 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:15 crc kubenswrapper[4787]: E0127 07:52:15.304903 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:15 crc kubenswrapper[4787]: E0127 07:52:15.305039 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs podName:3969f21f-ab36-49b4-9a9c-02cf19e65ad0 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:17.30501564 +0000 UTC m=+42.957371132 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs") pod "network-metrics-daemon-vws75" (UID: "3969f21f-ab36-49b4-9a9c-02cf19e65ad0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.307860 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.330819 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.346859 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.363586 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:15Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.390735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.390782 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.390810 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.390830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.390844 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:15Z","lastTransitionTime":"2026-01-27T07:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.495148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.495227 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.495242 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.495267 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.495301 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:15Z","lastTransitionTime":"2026-01-27T07:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.598649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.598688 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.598699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.598716 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.598730 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:15Z","lastTransitionTime":"2026-01-27T07:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.701986 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.702085 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.702127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.702168 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.702224 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:15Z","lastTransitionTime":"2026-01-27T07:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.804706 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.804764 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.804791 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.804815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.804866 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:15Z","lastTransitionTime":"2026-01-27T07:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.908252 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.908390 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.908408 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.908442 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:15 crc kubenswrapper[4787]: I0127 07:52:15.908456 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:15Z","lastTransitionTime":"2026-01-27T07:52:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.012915 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.012971 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.012984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.013012 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.013024 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:16Z","lastTransitionTime":"2026-01-27T07:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.052195 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:35:37.804762973 +0000 UTC Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.116697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.116740 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.116749 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.116766 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.116778 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:16Z","lastTransitionTime":"2026-01-27T07:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.220698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.220770 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.220783 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.220803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.220816 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:16Z","lastTransitionTime":"2026-01-27T07:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.323415 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.323476 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.323489 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.323508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.323522 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:16Z","lastTransitionTime":"2026-01-27T07:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.425782 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.425849 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.425864 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.425885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.425900 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:16Z","lastTransitionTime":"2026-01-27T07:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.528241 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.528312 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.528335 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.528363 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.528385 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:16Z","lastTransitionTime":"2026-01-27T07:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.632019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.632113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.632141 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.632177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.632201 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:16Z","lastTransitionTime":"2026-01-27T07:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.735084 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.735147 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.735161 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.735183 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.735202 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:16Z","lastTransitionTime":"2026-01-27T07:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.838121 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.838159 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.838170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.838186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.838197 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:16Z","lastTransitionTime":"2026-01-27T07:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.941541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.941622 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.941642 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.941662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:16 crc kubenswrapper[4787]: I0127 07:52:16.941677 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:16Z","lastTransitionTime":"2026-01-27T07:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.044717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.044781 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.044797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.044823 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.044843 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:17Z","lastTransitionTime":"2026-01-27T07:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.052490 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:19:06.219435008 +0000 UTC Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.077903 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.077935 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.077983 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.077991 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:17 crc kubenswrapper[4787]: E0127 07:52:17.078108 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:17 crc kubenswrapper[4787]: E0127 07:52:17.078299 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:17 crc kubenswrapper[4787]: E0127 07:52:17.078434 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:17 crc kubenswrapper[4787]: E0127 07:52:17.078533 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.147933 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.148011 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.148037 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.148069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.148087 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:17Z","lastTransitionTime":"2026-01-27T07:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.250766 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.250857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.250870 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.250889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.250904 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:17Z","lastTransitionTime":"2026-01-27T07:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.331335 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:17 crc kubenswrapper[4787]: E0127 07:52:17.331511 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:17 crc kubenswrapper[4787]: E0127 07:52:17.331650 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs podName:3969f21f-ab36-49b4-9a9c-02cf19e65ad0 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:21.331589051 +0000 UTC m=+46.983944553 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs") pod "network-metrics-daemon-vws75" (UID: "3969f21f-ab36-49b4-9a9c-02cf19e65ad0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.354867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.354954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.354983 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.355017 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.355045 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:17Z","lastTransitionTime":"2026-01-27T07:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.458246 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.458353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.458384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.458425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.458453 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:17Z","lastTransitionTime":"2026-01-27T07:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.561201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.561246 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.561259 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.561275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.561284 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:17Z","lastTransitionTime":"2026-01-27T07:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.664128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.664192 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.664202 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.664220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.664240 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:17Z","lastTransitionTime":"2026-01-27T07:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.768149 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.768215 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.768230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.768260 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.768275 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:17Z","lastTransitionTime":"2026-01-27T07:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.871414 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.871471 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.871486 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.871510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.871528 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:17Z","lastTransitionTime":"2026-01-27T07:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.975310 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.975373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.975395 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.975417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:17 crc kubenswrapper[4787]: I0127 07:52:17.975430 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:17Z","lastTransitionTime":"2026-01-27T07:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.053606 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:25:27.308115905 +0000 UTC Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.078516 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.078588 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.078599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.078617 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.078630 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:18Z","lastTransitionTime":"2026-01-27T07:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.181389 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.181450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.181459 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.181507 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.181519 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:18Z","lastTransitionTime":"2026-01-27T07:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.285363 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.285419 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.285432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.285453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.285467 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:18Z","lastTransitionTime":"2026-01-27T07:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.389243 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.389326 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.389346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.389378 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.389401 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:18Z","lastTransitionTime":"2026-01-27T07:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.492777 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.492852 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.492869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.492896 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.492917 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:18Z","lastTransitionTime":"2026-01-27T07:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.595771 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.595837 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.595862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.595896 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.595920 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:18Z","lastTransitionTime":"2026-01-27T07:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.699559 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.699676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.699699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.699724 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.699744 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:18Z","lastTransitionTime":"2026-01-27T07:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.802453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.802552 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.802611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.802650 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.802679 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:18Z","lastTransitionTime":"2026-01-27T07:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.905704 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.905763 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.905772 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.905790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:18 crc kubenswrapper[4787]: I0127 07:52:18.905831 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:18Z","lastTransitionTime":"2026-01-27T07:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.008935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.008998 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.009015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.009043 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.009064 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:19Z","lastTransitionTime":"2026-01-27T07:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.053863 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:34:52.818339788 +0000 UTC Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.076674 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:19 crc kubenswrapper[4787]: E0127 07:52:19.076856 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.076980 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.077058 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.077203 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:19 crc kubenswrapper[4787]: E0127 07:52:19.077276 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:19 crc kubenswrapper[4787]: E0127 07:52:19.077626 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:19 crc kubenswrapper[4787]: E0127 07:52:19.077847 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.113484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.113527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.113538 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.113573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.113588 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:19Z","lastTransitionTime":"2026-01-27T07:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.217137 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.217200 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.217213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.217235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.217254 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:19Z","lastTransitionTime":"2026-01-27T07:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.320331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.320417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.320444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.320483 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.320508 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:19Z","lastTransitionTime":"2026-01-27T07:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.424237 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.424322 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.424349 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.424388 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.424413 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:19Z","lastTransitionTime":"2026-01-27T07:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.527319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.527377 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.527389 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.527411 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.527424 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:19Z","lastTransitionTime":"2026-01-27T07:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.630033 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.630085 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.630100 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.630122 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.630139 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:19Z","lastTransitionTime":"2026-01-27T07:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.735000 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.735057 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.735069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.735089 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.735101 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:19Z","lastTransitionTime":"2026-01-27T07:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.837847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.837902 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.837915 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.837934 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.837950 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:19Z","lastTransitionTime":"2026-01-27T07:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.942410 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.942458 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.942467 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.942485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:19 crc kubenswrapper[4787]: I0127 07:52:19.942497 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:19Z","lastTransitionTime":"2026-01-27T07:52:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.044736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.044769 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.044777 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.044793 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.044802 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:20Z","lastTransitionTime":"2026-01-27T07:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.054529 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 23:57:16.484027287 +0000 UTC Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.148517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.148595 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.148613 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.148635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.148667 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:20Z","lastTransitionTime":"2026-01-27T07:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.252420 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.252507 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.252527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.252596 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.252618 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:20Z","lastTransitionTime":"2026-01-27T07:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.355212 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.355277 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.355290 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.355309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.355321 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:20Z","lastTransitionTime":"2026-01-27T07:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.458469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.458543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.458591 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.458620 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.458639 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:20Z","lastTransitionTime":"2026-01-27T07:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.561830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.561924 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.561946 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.561976 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.562012 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:20Z","lastTransitionTime":"2026-01-27T07:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.665852 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.666119 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.666138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.666157 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.666178 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:20Z","lastTransitionTime":"2026-01-27T07:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.769426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.769474 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.769485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.769510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.769524 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:20Z","lastTransitionTime":"2026-01-27T07:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.872869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.872938 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.872956 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.873002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.873021 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:20Z","lastTransitionTime":"2026-01-27T07:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.977436 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.977494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.977511 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.977531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:20 crc kubenswrapper[4787]: I0127 07:52:20.977544 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:20Z","lastTransitionTime":"2026-01-27T07:52:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.055231 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 07:36:04.241787243 +0000 UTC Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.075807 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.075881 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:21 crc kubenswrapper[4787]: E0127 07:52:21.076526 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.075979 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:21 crc kubenswrapper[4787]: E0127 07:52:21.076733 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:21 crc kubenswrapper[4787]: E0127 07:52:21.076948 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.075907 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:21 crc kubenswrapper[4787]: E0127 07:52:21.077223 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.079776 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.079812 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.079823 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.079843 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.079855 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.182986 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.183043 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.183060 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.183081 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.183095 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.286863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.286947 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.286973 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.287004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.287026 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.387716 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:21 crc kubenswrapper[4787]: E0127 07:52:21.387918 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:21 crc kubenswrapper[4787]: E0127 07:52:21.388015 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs podName:3969f21f-ab36-49b4-9a9c-02cf19e65ad0 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:29.387986151 +0000 UTC m=+55.040341653 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs") pod "network-metrics-daemon-vws75" (UID: "3969f21f-ab36-49b4-9a9c-02cf19e65ad0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.390344 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.390398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.390411 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.390432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.390447 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.493066 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.493098 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.493106 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.493120 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.493129 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.493922 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.493952 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.493960 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.493970 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.493979 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: E0127 07:52:21.507166 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.513227 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.513276 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.513286 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.513303 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.513314 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: E0127 07:52:21.526437 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.531869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.531913 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.531922 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.531942 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.531954 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: E0127 07:52:21.546909 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.551683 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.551742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.551755 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.551780 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.551795 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: E0127 07:52:21.570390 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.575836 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.575899 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.575921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.575946 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.575965 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: E0127 07:52:21.591024 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:21 crc kubenswrapper[4787]: E0127 07:52:21.591222 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.595826 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.595885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.595897 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.595916 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.595932 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.698544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.698622 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.698637 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.698662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.698678 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.802137 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.802197 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.802211 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.802236 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.802256 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.906203 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.906270 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.906296 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.906332 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:21 crc kubenswrapper[4787]: I0127 07:52:21.906356 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:21Z","lastTransitionTime":"2026-01-27T07:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.010030 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.010135 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.010152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.010177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.010197 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:22Z","lastTransitionTime":"2026-01-27T07:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.056074 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:56:32.839215637 +0000 UTC Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.113724 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.113786 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.113801 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.113826 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.113842 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:22Z","lastTransitionTime":"2026-01-27T07:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.217569 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.217610 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.217621 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.217639 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.217651 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:22Z","lastTransitionTime":"2026-01-27T07:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.321108 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.321172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.321181 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.321198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.321209 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:22Z","lastTransitionTime":"2026-01-27T07:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.424735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.424836 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.424879 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.424913 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.424935 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:22Z","lastTransitionTime":"2026-01-27T07:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.528818 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.528891 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.528912 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.528939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.528953 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:22Z","lastTransitionTime":"2026-01-27T07:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.632050 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.632105 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.632115 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.632134 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.632147 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:22Z","lastTransitionTime":"2026-01-27T07:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.736159 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.736229 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.736249 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.736276 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.736297 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:22Z","lastTransitionTime":"2026-01-27T07:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.840229 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.840324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.840346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.840371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.840395 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:22Z","lastTransitionTime":"2026-01-27T07:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.943347 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.943423 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.943444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.943478 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:22 crc kubenswrapper[4787]: I0127 07:52:22.943502 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:22Z","lastTransitionTime":"2026-01-27T07:52:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.046914 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.046987 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.047006 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.047035 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.047056 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:23Z","lastTransitionTime":"2026-01-27T07:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.057231 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 22:59:47.377984883 +0000 UTC Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.075725 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.075810 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.075875 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:23 crc kubenswrapper[4787]: E0127 07:52:23.075994 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.076104 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:23 crc kubenswrapper[4787]: E0127 07:52:23.076344 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:23 crc kubenswrapper[4787]: E0127 07:52:23.076535 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:23 crc kubenswrapper[4787]: E0127 07:52:23.081432 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.150434 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.150489 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.150503 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.150527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.150542 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:23Z","lastTransitionTime":"2026-01-27T07:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.253958 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.254029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.254042 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.254062 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.254076 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:23Z","lastTransitionTime":"2026-01-27T07:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.356258 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.356311 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.356322 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.356340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.356352 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:23Z","lastTransitionTime":"2026-01-27T07:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.459710 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.459797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.459816 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.459847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.459871 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:23Z","lastTransitionTime":"2026-01-27T07:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.563494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.563645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.563682 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.563720 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.563745 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:23Z","lastTransitionTime":"2026-01-27T07:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.667875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.667958 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.667977 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.668006 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.668026 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:23Z","lastTransitionTime":"2026-01-27T07:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.770855 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.770897 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.770913 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.770928 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.770938 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:23Z","lastTransitionTime":"2026-01-27T07:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.874386 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.874480 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.874500 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.874528 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.874547 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:23Z","lastTransitionTime":"2026-01-27T07:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.977459 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.977853 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.977943 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.978032 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:23 crc kubenswrapper[4787]: I0127 07:52:23.978106 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:23Z","lastTransitionTime":"2026-01-27T07:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.057473 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:44:38.285442502 +0000 UTC Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.082645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.082698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.082710 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.082729 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.082744 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:24Z","lastTransitionTime":"2026-01-27T07:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.185427 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.185514 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.185530 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.185555 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.185591 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:24Z","lastTransitionTime":"2026-01-27T07:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.288736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.288813 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.288830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.288854 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.288871 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:24Z","lastTransitionTime":"2026-01-27T07:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.392031 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.392126 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.392151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.392187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.392212 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:24Z","lastTransitionTime":"2026-01-27T07:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.494757 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.494799 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.494807 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.494822 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.494832 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:24Z","lastTransitionTime":"2026-01-27T07:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.598030 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.598097 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.598115 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.598139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.598153 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:24Z","lastTransitionTime":"2026-01-27T07:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.700863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.700921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.700934 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.700953 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.700965 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:24Z","lastTransitionTime":"2026-01-27T07:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.804279 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.804333 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.804343 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.804362 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.804374 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:24Z","lastTransitionTime":"2026-01-27T07:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.908494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.908537 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.908550 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.908580 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:24 crc kubenswrapper[4787]: I0127 07:52:24.908591 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:24Z","lastTransitionTime":"2026-01-27T07:52:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.011440 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.011888 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.012029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.012161 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.012286 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:25Z","lastTransitionTime":"2026-01-27T07:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.058535 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:19:58.664419934 +0000 UTC Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.076355 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.076422 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:25 crc kubenswrapper[4787]: E0127 07:52:25.076624 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.076733 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:25 crc kubenswrapper[4787]: E0127 07:52:25.076782 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:25 crc kubenswrapper[4787]: E0127 07:52:25.076937 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.077250 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:25 crc kubenswrapper[4787]: E0127 07:52:25.077668 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.096348 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.115010 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 
07:52:25.115741 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.115879 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.115965 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.116059 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.116139 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:25Z","lastTransitionTime":"2026-01-27T07:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.132006 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.144367 4787 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.161230 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.177727 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.193401 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.210622 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.219084 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.219133 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.219145 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.219170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.219185 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:25Z","lastTransitionTime":"2026-01-27T07:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.226331 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.242388 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.265208 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.288831 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
7T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156
cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.306352 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.322338 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.322816 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.322864 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.322877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.322904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.322920 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:25Z","lastTransitionTime":"2026-01-27T07:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.337713 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.358753 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"80c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0127 07:52:11.779634 6307 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0127 07:52:11.779647 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network contro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.373948 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:25Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.425458 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.425512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.425525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.425569 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.425585 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:25Z","lastTransitionTime":"2026-01-27T07:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.528808 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.528875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.528893 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.528915 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.528929 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:25Z","lastTransitionTime":"2026-01-27T07:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.632645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.632768 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.632789 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.632819 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.632841 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:25Z","lastTransitionTime":"2026-01-27T07:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.736543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.736634 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.736647 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.736663 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.736674 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:25Z","lastTransitionTime":"2026-01-27T07:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.839515 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.839631 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.839658 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.839695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.839720 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:25Z","lastTransitionTime":"2026-01-27T07:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.943542 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.943641 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.943663 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.943693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:25 crc kubenswrapper[4787]: I0127 07:52:25.943713 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:25Z","lastTransitionTime":"2026-01-27T07:52:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.047288 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.047376 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.047398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.047429 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.047448 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:26Z","lastTransitionTime":"2026-01-27T07:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.058852 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:51:59.391602252 +0000 UTC Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.149795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.149843 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.149857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.149877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.149892 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:26Z","lastTransitionTime":"2026-01-27T07:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.253369 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.253421 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.253437 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.253467 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.253483 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:26Z","lastTransitionTime":"2026-01-27T07:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.357873 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.357947 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.357969 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.358004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.358028 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:26Z","lastTransitionTime":"2026-01-27T07:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.461378 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.461454 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.461475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.461505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.461526 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:26Z","lastTransitionTime":"2026-01-27T07:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.564185 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.564246 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.564263 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.564283 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.564302 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:26Z","lastTransitionTime":"2026-01-27T07:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.667656 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.667702 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.667715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.667733 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.667748 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:26Z","lastTransitionTime":"2026-01-27T07:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.770428 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.770478 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.770490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.770505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.770517 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:26Z","lastTransitionTime":"2026-01-27T07:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.856352 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.856611 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:52:58.856580423 +0000 UTC m=+84.508935915 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.856697 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.856759 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.856821 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.856866 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.856966 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.857001 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.856999 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.857008 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:58.856998381 +0000 UTC m=+84.509353873 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.857059 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:58.857048902 +0000 UTC m=+84.509404394 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.857063 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.857097 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.857196 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.857274 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.857299 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.857231 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:58.857192565 +0000 UTC m=+84.509548087 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:26 crc kubenswrapper[4787]: E0127 07:52:26.857420 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:58.85737774 +0000 UTC m=+84.509733442 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.873267 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.873329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.873340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.873357 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.873369 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:26Z","lastTransitionTime":"2026-01-27T07:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.977474 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.977531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.977541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.977574 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:26 crc kubenswrapper[4787]: I0127 07:52:26.977586 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:26Z","lastTransitionTime":"2026-01-27T07:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.059354 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:36:39.106737923 +0000 UTC Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.076280 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.076297 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.076457 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.077336 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:27 crc kubenswrapper[4787]: E0127 07:52:27.077628 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:27 crc kubenswrapper[4787]: E0127 07:52:27.077861 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:27 crc kubenswrapper[4787]: E0127 07:52:27.078132 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:27 crc kubenswrapper[4787]: E0127 07:52:27.078278 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.083489 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.083529 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.083540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.083570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.083580 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:27Z","lastTransitionTime":"2026-01-27T07:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.154601 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.171622 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.173049 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.187015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.187088 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.187103 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.187123 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.187155 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:27Z","lastTransitionTime":"2026-01-27T07:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.190350 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.206533 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.219272 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.231633 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.250311 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.276538 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.290159 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.290218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.290231 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.290252 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.290272 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:27Z","lastTransitionTime":"2026-01-27T07:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.293267 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.312508 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.331683 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.351349 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.385442 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e49
34d8ae66ea7b9928e95ac995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"80c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0127 07:52:11.779634 6307 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0127 07:52:11.779647 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network contro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.393411 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.393464 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.393475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.393496 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.393507 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:27Z","lastTransitionTime":"2026-01-27T07:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.398706 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.412849 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc 
kubenswrapper[4787]: I0127 07:52:27.437378 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.458295 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.476522 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:27Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.496043 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.496107 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.496125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.496150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.496170 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:27Z","lastTransitionTime":"2026-01-27T07:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.599411 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.599473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.599486 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.599511 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.599529 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:27Z","lastTransitionTime":"2026-01-27T07:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.702397 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.702458 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.702476 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.702501 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.702519 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:27Z","lastTransitionTime":"2026-01-27T07:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.806043 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.806113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.806139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.806172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.806192 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:27Z","lastTransitionTime":"2026-01-27T07:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.909615 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.909691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.909701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.909738 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:27 crc kubenswrapper[4787]: I0127 07:52:27.909749 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:27Z","lastTransitionTime":"2026-01-27T07:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.013692 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.013824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.013845 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.013869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.013888 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:28Z","lastTransitionTime":"2026-01-27T07:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.059725 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:48:35.479976623 +0000 UTC Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.117408 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.117483 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.117508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.117540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.117593 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:28Z","lastTransitionTime":"2026-01-27T07:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.220095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.220139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.220151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.220168 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.220180 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:28Z","lastTransitionTime":"2026-01-27T07:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.323392 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.324023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.324189 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.324352 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.324498 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:28Z","lastTransitionTime":"2026-01-27T07:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.428485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.428545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.428592 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.428620 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.428643 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:28Z","lastTransitionTime":"2026-01-27T07:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.532931 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.533067 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.533095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.533128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.533152 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:28Z","lastTransitionTime":"2026-01-27T07:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.637214 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.637289 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.637301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.637323 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.637336 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:28Z","lastTransitionTime":"2026-01-27T07:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.741765 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.741838 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.741858 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.741889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.741909 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:28Z","lastTransitionTime":"2026-01-27T07:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.845701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.845783 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.845806 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.845833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.845848 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:28Z","lastTransitionTime":"2026-01-27T07:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.951225 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.951347 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.951422 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.951621 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:28 crc kubenswrapper[4787]: I0127 07:52:28.951657 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:28Z","lastTransitionTime":"2026-01-27T07:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.056040 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.056129 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.056156 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.056187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.056209 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:29Z","lastTransitionTime":"2026-01-27T07:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.060422 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:18:52.859487495 +0000 UTC Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.075647 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.075696 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.075742 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.075808 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:29 crc kubenswrapper[4787]: E0127 07:52:29.076172 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:29 crc kubenswrapper[4787]: E0127 07:52:29.076301 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:29 crc kubenswrapper[4787]: E0127 07:52:29.076400 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:29 crc kubenswrapper[4787]: E0127 07:52:29.077241 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.077957 4787 scope.go:117] "RemoveContainer" containerID="023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.085259 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.105596 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-m
anager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.126970 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.148845 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.161616 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.161698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.161750 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.161796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.161856 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:29Z","lastTransitionTime":"2026-01-27T07:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.164894 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.184813 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.204035 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.242698 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.265941 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.265993 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.266004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.266023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.266034 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:29Z","lastTransitionTime":"2026-01-27T07:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.269774 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.288431 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.314414 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.333342 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.366796 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e49
34d8ae66ea7b9928e95ac995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"80c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0127 07:52:11.779634 6307 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0127 07:52:11.779647 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network contro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.370024 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.370086 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.370108 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.370138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.370155 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:29Z","lastTransitionTime":"2026-01-27T07:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.381327 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.393382 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc 
kubenswrapper[4787]: I0127 07:52:29.410123 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.427189 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.441825 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.453371 4787 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.472268 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.472308 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.472318 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.472335 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.472346 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:29Z","lastTransitionTime":"2026-01-27T07:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.487263 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:29 crc kubenswrapper[4787]: E0127 07:52:29.487482 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:29 crc kubenswrapper[4787]: E0127 07:52:29.487617 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs podName:3969f21f-ab36-49b4-9a9c-02cf19e65ad0 nodeName:}" failed. No retries permitted until 2026-01-27 07:52:45.487580717 +0000 UTC m=+71.139936359 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs") pod "network-metrics-daemon-vws75" (UID: "3969f21f-ab36-49b4-9a9c-02cf19e65ad0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.502380 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/1.log" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.506948 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f"} Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.507699 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.530579 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.546697 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.560664 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.574817 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.574889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.574904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.574926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.574964 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:29Z","lastTransitionTime":"2026-01-27T07:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.577295 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.599542 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.631993 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.679059 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.679104 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.679116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.679135 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.679147 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:29Z","lastTransitionTime":"2026-01-27T07:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.682482 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505
962fddc38dd440a1d56d505f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"80c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0127 07:52:11.779634 6307 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0127 07:52:11.779647 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
contro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.699615 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 
07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.713937 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.737928 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.755834 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.770871 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.782188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.782259 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.782274 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.782296 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.782313 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:29Z","lastTransitionTime":"2026-01-27T07:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.785959 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.801826 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.818222 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.832932 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.846498 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.867544 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:29Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.884540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.884615 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.884626 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.884645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.884659 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:29Z","lastTransitionTime":"2026-01-27T07:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.987715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.987786 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.987804 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.987825 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:29 crc kubenswrapper[4787]: I0127 07:52:29.987841 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:29Z","lastTransitionTime":"2026-01-27T07:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.060883 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:13:45.013565453 +0000 UTC Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.091803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.091886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.091905 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.091938 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.091959 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:30Z","lastTransitionTime":"2026-01-27T07:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.194197 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.194240 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.194248 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.194266 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.194276 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:30Z","lastTransitionTime":"2026-01-27T07:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.296441 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.296481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.296499 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.296519 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.296532 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:30Z","lastTransitionTime":"2026-01-27T07:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.399315 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.399391 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.399414 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.399448 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.399470 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:30Z","lastTransitionTime":"2026-01-27T07:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.506799 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.506857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.506868 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.506888 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.506903 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:30Z","lastTransitionTime":"2026-01-27T07:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.512810 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/2.log" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.513455 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/1.log" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.516619 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f" exitCode=1 Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.516665 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f"} Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.516722 4787 scope.go:117] "RemoveContainer" containerID="023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.517397 4787 scope.go:117] "RemoveContainer" containerID="adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f" Jan 27 07:52:30 crc kubenswrapper[4787]: E0127 07:52:30.517600 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.533618 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.555463 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.569974 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.584234 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.599170 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d6209086784221118
4f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.610095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.610148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.610160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.610177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.610189 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:30Z","lastTransitionTime":"2026-01-27T07:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.613699 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.625807 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.635596 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.645316 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.659955 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.678276 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.691391 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.709188 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.712287 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.712336 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.712346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.712366 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.712379 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:30Z","lastTransitionTime":"2026-01-27T07:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.725440 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.745766 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.768661 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505
962fddc38dd440a1d56d505f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023dbda584b2664829844328d67f417bacaf8e4934d8ae66ea7b9928e95ac995\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"message\\\":\\\"80c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0127 07:52:11.779634 6307 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0127 07:52:11.779647 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network contro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:30Z\\\",\\\"message\\\":\\\" 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0127 07:52:30.089457 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:52:30.089470 6533 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-rqwfc\\\\nI0127 07:52:30.089462 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb 
in node crc\\\\nI0127 07:52:30.089487 6533 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1
99ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.788845 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 
07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.804012 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.815432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.815486 4787 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.815498 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.815517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.815531 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:30Z","lastTransitionTime":"2026-01-27T07:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.917703 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.917761 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.917776 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.917797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:30 crc kubenswrapper[4787]: I0127 07:52:30.917810 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:30Z","lastTransitionTime":"2026-01-27T07:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.021063 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.021105 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.021115 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.021132 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.021142 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.061856 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:15:51.867177862 +0000 UTC Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.076409 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.076514 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:31 crc kubenswrapper[4787]: E0127 07:52:31.076573 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:31 crc kubenswrapper[4787]: E0127 07:52:31.076736 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.076787 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:31 crc kubenswrapper[4787]: E0127 07:52:31.076888 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.077650 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:31 crc kubenswrapper[4787]: E0127 07:52:31.078140 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.123969 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.124018 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.124034 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.124064 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.124081 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.226510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.226588 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.226605 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.226630 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.226647 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.329188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.329236 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.329255 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.329274 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.329284 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.431376 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.431431 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.431454 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.431473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.431487 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.523686 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/2.log" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.528206 4787 scope.go:117] "RemoveContainer" containerID="adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f" Jan 27 07:52:31 crc kubenswrapper[4787]: E0127 07:52:31.528403 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.534074 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.534126 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.534143 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.534165 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.534179 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.545338 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.563035 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.585772 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc 
kubenswrapper[4787]: I0127 07:52:31.600625 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.615676 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.633670 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.637325 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.637391 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.637401 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.637422 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.637437 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.652813 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.665704 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.679044 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.700370 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.722539 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.738775 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.740824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.740887 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.740923 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.741020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.741065 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.760967 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.789971 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.801450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.801490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.801501 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.801520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.801532 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.815880 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: E0127 07:52:31.820082 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.824244 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.824314 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.824328 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.824368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.824381 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: E0127 07:52:31.837244 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.839448 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:30Z\\\",\\\"message\\\":\\\" 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0127 07:52:30.089457 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:52:30.089470 6533 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-rqwfc\\\\nI0127 07:52:30.089462 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0127 07:52:30.089487 6533 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.841264 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.841310 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.841326 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.841343 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.841354 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: E0127 07:52:31.852265 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.853078 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 
07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.855963 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.856012 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.856021 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.856041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.856057 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.863549 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: E0127 07:52:31.869747 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.873994 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.874066 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.874102 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.874127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.874141 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: E0127 07:52:31.886695 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:31 crc kubenswrapper[4787]: E0127 07:52:31.886919 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.888888 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.888933 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.888968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.888989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.888999 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.992219 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.992266 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.992275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.992294 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:31 crc kubenswrapper[4787]: I0127 07:52:31.992304 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:31Z","lastTransitionTime":"2026-01-27T07:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.062953 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:10:41.302902958 +0000 UTC Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.095663 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.095740 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.095764 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.095800 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.095820 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:32Z","lastTransitionTime":"2026-01-27T07:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.199030 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.199092 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.199111 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.199140 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.199160 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:32Z","lastTransitionTime":"2026-01-27T07:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.303196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.303288 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.303324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.303358 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.303382 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:32Z","lastTransitionTime":"2026-01-27T07:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.406758 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.406833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.406850 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.406882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.406909 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:32Z","lastTransitionTime":"2026-01-27T07:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.509594 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.509658 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.509675 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.509699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.509724 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:32Z","lastTransitionTime":"2026-01-27T07:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.612487 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.612537 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.612574 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.612598 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.612614 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:32Z","lastTransitionTime":"2026-01-27T07:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.716735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.716803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.716823 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.716852 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.716872 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:32Z","lastTransitionTime":"2026-01-27T07:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.820779 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.820850 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.820869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.820901 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.820921 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:32Z","lastTransitionTime":"2026-01-27T07:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.924406 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.924476 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.924490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.924511 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:32 crc kubenswrapper[4787]: I0127 07:52:32.925323 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:32Z","lastTransitionTime":"2026-01-27T07:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.027921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.028002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.028024 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.028049 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.028064 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:33Z","lastTransitionTime":"2026-01-27T07:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.064988 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 06:50:55.644590615 +0000 UTC Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.076860 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.076943 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.076975 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.077002 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:33 crc kubenswrapper[4787]: E0127 07:52:33.077050 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:33 crc kubenswrapper[4787]: E0127 07:52:33.077303 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:33 crc kubenswrapper[4787]: E0127 07:52:33.077470 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:33 crc kubenswrapper[4787]: E0127 07:52:33.077600 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.131869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.131945 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.131966 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.131995 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.132014 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:33Z","lastTransitionTime":"2026-01-27T07:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.235237 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.235763 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.235802 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.235827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.235843 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:33Z","lastTransitionTime":"2026-01-27T07:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.340118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.340176 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.340196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.340227 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.340251 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:33Z","lastTransitionTime":"2026-01-27T07:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.443346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.443406 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.443425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.443455 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.443474 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:33Z","lastTransitionTime":"2026-01-27T07:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.547169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.547251 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.547278 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.547313 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.547339 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:33Z","lastTransitionTime":"2026-01-27T07:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.650880 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.650946 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.650968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.650999 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.651021 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:33Z","lastTransitionTime":"2026-01-27T07:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.755053 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.755112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.755129 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.755158 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.755177 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:33Z","lastTransitionTime":"2026-01-27T07:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.857986 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.858031 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.858045 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.858070 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.858086 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:33Z","lastTransitionTime":"2026-01-27T07:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.962426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.962542 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.962609 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.962645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:33 crc kubenswrapper[4787]: I0127 07:52:33.962676 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:33Z","lastTransitionTime":"2026-01-27T07:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.065214 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:10:28.934992815 +0000 UTC Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.065577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.066023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.066103 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.066190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.066326 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:34Z","lastTransitionTime":"2026-01-27T07:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.169475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.169530 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.169541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.169576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.169588 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:34Z","lastTransitionTime":"2026-01-27T07:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.273023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.273088 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.273107 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.273131 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.273145 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:34Z","lastTransitionTime":"2026-01-27T07:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.377590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.378027 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.378162 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.378296 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.378431 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:34Z","lastTransitionTime":"2026-01-27T07:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.481650 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.481742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.481756 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.481784 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.481800 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:34Z","lastTransitionTime":"2026-01-27T07:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.585896 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.585968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.585987 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.586024 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.586050 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:34Z","lastTransitionTime":"2026-01-27T07:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.689124 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.689184 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.689195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.689212 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.689229 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:34Z","lastTransitionTime":"2026-01-27T07:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.792510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.792598 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.792618 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.792645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.792658 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:34Z","lastTransitionTime":"2026-01-27T07:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.895334 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.895425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.895450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.895485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.895510 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:34Z","lastTransitionTime":"2026-01-27T07:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.998838 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.998912 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.998925 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.998944 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:34 crc kubenswrapper[4787]: I0127 07:52:34.998956 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:34Z","lastTransitionTime":"2026-01-27T07:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.066418 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:14:41.364249672 +0000 UTC Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.076175 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.076175 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.076185 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.076218 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:35 crc kubenswrapper[4787]: E0127 07:52:35.076494 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:35 crc kubenswrapper[4787]: E0127 07:52:35.076644 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:35 crc kubenswrapper[4787]: E0127 07:52:35.076916 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:35 crc kubenswrapper[4787]: E0127 07:52:35.077145 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.096609 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.101467 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.101513 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.101544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.101918 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.101961 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:35Z","lastTransitionTime":"2026-01-27T07:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.115662 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.139875 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.155901 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.172988 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.189403 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.204359 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.204464 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.204477 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.204503 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.204520 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:35Z","lastTransitionTime":"2026-01-27T07:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.212001 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:30Z\\\",\\\"message\\\":\\\" 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0127 07:52:30.089457 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:52:30.089470 6533 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-rqwfc\\\\nI0127 07:52:30.089462 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0127 07:52:30.089487 6533 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.226770 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.240485 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.254867 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.272649 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.287725 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.304853 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.307749 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.307795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.307814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.307843 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.307863 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:35Z","lastTransitionTime":"2026-01-27T07:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.319103 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.331516 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.353267 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.372228 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.388820 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:35Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.411087 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.411149 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.411169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.411196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.411216 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:35Z","lastTransitionTime":"2026-01-27T07:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.513859 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.513889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.513897 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.513912 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.513924 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:35Z","lastTransitionTime":"2026-01-27T07:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.617406 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.617443 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.617454 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.617469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.617478 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:35Z","lastTransitionTime":"2026-01-27T07:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.720472 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.720509 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.720518 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.720531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.720542 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:35Z","lastTransitionTime":"2026-01-27T07:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.823420 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.823456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.823464 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.823478 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.823487 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:35Z","lastTransitionTime":"2026-01-27T07:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.926593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.926646 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.926659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.926676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:35 crc kubenswrapper[4787]: I0127 07:52:35.926688 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:35Z","lastTransitionTime":"2026-01-27T07:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.029980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.030026 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.030037 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.030056 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.030071 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:36Z","lastTransitionTime":"2026-01-27T07:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.066838 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:42:02.422486016 +0000 UTC Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.133137 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.133188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.133202 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.133239 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.133253 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:36Z","lastTransitionTime":"2026-01-27T07:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.236873 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.236919 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.236932 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.236954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.236967 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:36Z","lastTransitionTime":"2026-01-27T07:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.340128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.340186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.340198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.340222 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.340235 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:36Z","lastTransitionTime":"2026-01-27T07:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.443198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.443281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.443317 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.443353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.443376 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:36Z","lastTransitionTime":"2026-01-27T07:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.546739 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.546827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.546842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.546865 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.546906 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:36Z","lastTransitionTime":"2026-01-27T07:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.649894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.650014 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.650065 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.650086 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.650099 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:36Z","lastTransitionTime":"2026-01-27T07:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.752626 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.752734 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.752758 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.752819 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.752839 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:36Z","lastTransitionTime":"2026-01-27T07:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.856456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.856502 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.856512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.856532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.856543 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:36Z","lastTransitionTime":"2026-01-27T07:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.960260 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.960314 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.960325 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.960346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:36 crc kubenswrapper[4787]: I0127 07:52:36.960357 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:36Z","lastTransitionTime":"2026-01-27T07:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.063453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.063503 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.063520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.063545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.063594 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:37Z","lastTransitionTime":"2026-01-27T07:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.067018 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:03:35.95226279 +0000 UTC Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.076502 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.076524 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.076665 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:37 crc kubenswrapper[4787]: E0127 07:52:37.076744 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.076799 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:37 crc kubenswrapper[4787]: E0127 07:52:37.076989 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:37 crc kubenswrapper[4787]: E0127 07:52:37.077129 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:37 crc kubenswrapper[4787]: E0127 07:52:37.077322 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.167032 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.167088 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.167098 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.167115 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.167126 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:37Z","lastTransitionTime":"2026-01-27T07:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.270879 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.270953 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.270968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.270993 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.271008 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:37Z","lastTransitionTime":"2026-01-27T07:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.375261 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.375319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.375333 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.375354 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.375369 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:37Z","lastTransitionTime":"2026-01-27T07:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.479447 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.479503 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.479520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.479545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.479609 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:37Z","lastTransitionTime":"2026-01-27T07:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.583547 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.583655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.583675 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.583705 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.583731 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:37Z","lastTransitionTime":"2026-01-27T07:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.687445 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.687512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.687531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.687579 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.687602 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:37Z","lastTransitionTime":"2026-01-27T07:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.790736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.790814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.790829 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.790977 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.791001 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:37Z","lastTransitionTime":"2026-01-27T07:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.894059 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.894130 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.894146 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.894172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.894189 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:37Z","lastTransitionTime":"2026-01-27T07:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.997856 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.997948 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.997964 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.997987 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:37 crc kubenswrapper[4787]: I0127 07:52:37.998005 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:37Z","lastTransitionTime":"2026-01-27T07:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.067280 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:28:17.075609339 +0000 UTC Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.102484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.102630 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.102658 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.102695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.102720 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:38Z","lastTransitionTime":"2026-01-27T07:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.206833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.206933 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.206959 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.206993 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.207019 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:38Z","lastTransitionTime":"2026-01-27T07:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.310425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.310478 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.310495 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.310519 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.310537 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:38Z","lastTransitionTime":"2026-01-27T07:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.423399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.423473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.423493 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.423525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.423575 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:38Z","lastTransitionTime":"2026-01-27T07:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.528037 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.528120 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.528146 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.528220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.528245 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:38Z","lastTransitionTime":"2026-01-27T07:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.631030 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.631098 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.631116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.631146 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.631168 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:38Z","lastTransitionTime":"2026-01-27T07:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.735101 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.735184 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.735202 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.735229 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.735248 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:38Z","lastTransitionTime":"2026-01-27T07:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.839159 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.839215 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.839224 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.839242 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.839252 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:38Z","lastTransitionTime":"2026-01-27T07:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.943298 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.943353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.943368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.943391 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:38 crc kubenswrapper[4787]: I0127 07:52:38.943405 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:38Z","lastTransitionTime":"2026-01-27T07:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.046620 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.046699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.046715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.046760 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.046776 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:39Z","lastTransitionTime":"2026-01-27T07:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.068416 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:05:16.130982658 +0000 UTC Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.076163 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.076228 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.076291 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.076234 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:39 crc kubenswrapper[4787]: E0127 07:52:39.076338 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:39 crc kubenswrapper[4787]: E0127 07:52:39.076547 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:39 crc kubenswrapper[4787]: E0127 07:52:39.076819 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:39 crc kubenswrapper[4787]: E0127 07:52:39.076897 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.150230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.150299 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.150317 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.150345 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.150361 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:39Z","lastTransitionTime":"2026-01-27T07:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.254784 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.254954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.254973 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.254998 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.255070 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:39Z","lastTransitionTime":"2026-01-27T07:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.358717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.358784 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.358802 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.358829 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.358848 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:39Z","lastTransitionTime":"2026-01-27T07:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.462001 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.462103 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.462125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.462152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.462174 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:39Z","lastTransitionTime":"2026-01-27T07:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.565936 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.566020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.566031 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.566051 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.566065 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:39Z","lastTransitionTime":"2026-01-27T07:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.669259 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.669371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.669391 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.669421 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.669446 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:39Z","lastTransitionTime":"2026-01-27T07:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.772644 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.772679 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.772688 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.772705 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.772716 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:39Z","lastTransitionTime":"2026-01-27T07:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.875795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.875867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.875881 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.875906 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.875922 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:39Z","lastTransitionTime":"2026-01-27T07:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.978643 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.978695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.978705 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.978723 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:39 crc kubenswrapper[4787]: I0127 07:52:39.978733 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:39Z","lastTransitionTime":"2026-01-27T07:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.069687 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:26:10.97669824 +0000 UTC Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.081594 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.081649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.081661 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.081682 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.081698 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:40Z","lastTransitionTime":"2026-01-27T07:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.184115 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.184181 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.184193 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.184216 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.184237 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:40Z","lastTransitionTime":"2026-01-27T07:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.286957 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.287004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.287015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.287034 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.287055 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:40Z","lastTransitionTime":"2026-01-27T07:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.390298 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.390351 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.390364 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.390384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.390395 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:40Z","lastTransitionTime":"2026-01-27T07:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.494281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.494332 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.494343 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.494360 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.494370 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:40Z","lastTransitionTime":"2026-01-27T07:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.597711 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.597775 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.597792 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.597820 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.597837 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:40Z","lastTransitionTime":"2026-01-27T07:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.700360 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.700404 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.700415 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.700431 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.700442 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:40Z","lastTransitionTime":"2026-01-27T07:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.803398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.803453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.803467 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.803482 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.803492 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:40Z","lastTransitionTime":"2026-01-27T07:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.907295 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.907343 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.907354 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.907371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:40 crc kubenswrapper[4787]: I0127 07:52:40.907382 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:40Z","lastTransitionTime":"2026-01-27T07:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.009502 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.009568 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.009581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.009599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.009612 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:41Z","lastTransitionTime":"2026-01-27T07:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.070270 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:07:44.395275114 +0000 UTC Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.075512 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.075615 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.075626 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:41 crc kubenswrapper[4787]: E0127 07:52:41.075788 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.075818 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:41 crc kubenswrapper[4787]: E0127 07:52:41.075871 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:41 crc kubenswrapper[4787]: E0127 07:52:41.075666 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:41 crc kubenswrapper[4787]: E0127 07:52:41.075945 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.112240 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.112284 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.112297 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.112313 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.112326 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:41Z","lastTransitionTime":"2026-01-27T07:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.215164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.215213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.215223 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.215240 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.215251 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:41Z","lastTransitionTime":"2026-01-27T07:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.317488 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.317528 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.317538 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.317583 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.317596 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:41Z","lastTransitionTime":"2026-01-27T07:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.419904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.419951 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.419961 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.419979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.419989 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:41Z","lastTransitionTime":"2026-01-27T07:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.522832 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.522885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.522894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.522911 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.522922 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:41Z","lastTransitionTime":"2026-01-27T07:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.626278 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.626331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.626344 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.626363 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.626375 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:41Z","lastTransitionTime":"2026-01-27T07:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.729096 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.729179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.729198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.729218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.729229 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:41Z","lastTransitionTime":"2026-01-27T07:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.833329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.833378 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.833390 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.833408 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.833420 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:41Z","lastTransitionTime":"2026-01-27T07:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.936481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.936528 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.936541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.936582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:41 crc kubenswrapper[4787]: I0127 07:52:41.936597 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:41Z","lastTransitionTime":"2026-01-27T07:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.039788 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.039832 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.039846 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.039863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.039881 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.071455 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:53:26.735688072 +0000 UTC Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.140157 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.140208 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.140218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.140236 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.140249 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: E0127 07:52:42.154793 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:42Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.159163 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.159198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.159208 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.159220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.159230 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: E0127 07:52:42.176194 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:42Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.186625 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.186699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.186712 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.186736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.186750 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: E0127 07:52:42.206895 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:42Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.212222 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.212268 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.212279 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.212294 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.212304 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: E0127 07:52:42.224455 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:42Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.228704 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.228771 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.228787 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.228814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.228829 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: E0127 07:52:42.242106 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:42Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:42 crc kubenswrapper[4787]: E0127 07:52:42.242359 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.245365 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.245425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.245438 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.245463 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.245491 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.347581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.347632 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.347643 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.347662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.347678 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.451148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.451224 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.451233 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.451251 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.451263 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.554239 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.554298 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.554312 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.554331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.554343 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.657632 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.657710 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.657729 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.657767 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.657787 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.760107 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.760170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.760186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.760210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.760226 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.863138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.863196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.863206 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.863225 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.863238 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.966148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.966210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.966231 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.966259 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:42 crc kubenswrapper[4787]: I0127 07:52:42.966281 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:42Z","lastTransitionTime":"2026-01-27T07:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.069122 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.069173 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.069187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.069207 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.069224 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:43Z","lastTransitionTime":"2026-01-27T07:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.072281 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 19:16:31.972185356 +0000 UTC Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.075585 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.075644 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.075719 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:43 crc kubenswrapper[4787]: E0127 07:52:43.075828 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.075896 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:43 crc kubenswrapper[4787]: E0127 07:52:43.076126 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:43 crc kubenswrapper[4787]: E0127 07:52:43.076112 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:43 crc kubenswrapper[4787]: E0127 07:52:43.076304 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.173327 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.173368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.173381 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.173403 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.173423 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:43Z","lastTransitionTime":"2026-01-27T07:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.278066 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.278104 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.278116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.278138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.278152 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:43Z","lastTransitionTime":"2026-01-27T07:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.382973 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.383024 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.383036 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.383058 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.383070 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:43Z","lastTransitionTime":"2026-01-27T07:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.485843 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.485878 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.485889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.485908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.485923 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:43Z","lastTransitionTime":"2026-01-27T07:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.588155 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.588185 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.588197 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.588213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.588226 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:43Z","lastTransitionTime":"2026-01-27T07:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.690988 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.691040 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.691052 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.691071 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.691084 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:43Z","lastTransitionTime":"2026-01-27T07:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.794229 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.794292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.794309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.794329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.794347 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:43Z","lastTransitionTime":"2026-01-27T07:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.896450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.896485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.896495 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.896509 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.896521 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:43Z","lastTransitionTime":"2026-01-27T07:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.999167 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.999211 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.999221 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.999240 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:43 crc kubenswrapper[4787]: I0127 07:52:43.999251 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:43Z","lastTransitionTime":"2026-01-27T07:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.072928 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:03:45.890651277 +0000 UTC Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.102363 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.102424 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.102439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.102459 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.102473 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:44Z","lastTransitionTime":"2026-01-27T07:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.204890 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.204950 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.204962 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.204984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.204997 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:44Z","lastTransitionTime":"2026-01-27T07:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.307736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.307768 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.307776 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.307789 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.307798 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:44Z","lastTransitionTime":"2026-01-27T07:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.411623 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.411688 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.411700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.411726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.411740 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:44Z","lastTransitionTime":"2026-01-27T07:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.515789 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.515867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.515877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.515899 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.515917 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:44Z","lastTransitionTime":"2026-01-27T07:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.618417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.618473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.618484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.618506 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.618518 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:44Z","lastTransitionTime":"2026-01-27T07:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.721333 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.721382 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.721394 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.721416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.721427 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:44Z","lastTransitionTime":"2026-01-27T07:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.824505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.824595 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.824609 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.824628 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.824641 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:44Z","lastTransitionTime":"2026-01-27T07:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.927938 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.927997 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.928010 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.928032 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:44 crc kubenswrapper[4787]: I0127 07:52:44.928045 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:44Z","lastTransitionTime":"2026-01-27T07:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.031439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.031505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.031525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.031581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.031604 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:45Z","lastTransitionTime":"2026-01-27T07:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.074040 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:33:11.026921117 +0000 UTC Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.076289 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.076442 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.076455 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.076727 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:45 crc kubenswrapper[4787]: E0127 07:52:45.076745 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:45 crc kubenswrapper[4787]: E0127 07:52:45.076852 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:45 crc kubenswrapper[4787]: E0127 07:52:45.077236 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.077604 4787 scope.go:117] "RemoveContainer" containerID="adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f" Jan 27 07:52:45 crc kubenswrapper[4787]: E0127 07:52:45.077635 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:45 crc kubenswrapper[4787]: E0127 07:52:45.077871 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.144995 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.145051 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.145065 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.145088 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.145101 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:45Z","lastTransitionTime":"2026-01-27T07:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.148676 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505
962fddc38dd440a1d56d505f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:30Z\\\",\\\"message\\\":\\\" 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0127 07:52:30.089457 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:52:30.089470 6533 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-rqwfc\\\\nI0127 07:52:30.089462 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0127 07:52:30.089487 6533 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.163998 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.174752 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.187699 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.202875 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.218230 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.233302 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.247667 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.247690 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.247698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.247713 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.247722 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:45Z","lastTransitionTime":"2026-01-27T07:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.248983 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.264633 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.278316 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.289400 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.298955 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.314946 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.337317 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.351136 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.351215 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.351241 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.351270 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.351290 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:45Z","lastTransitionTime":"2026-01-27T07:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.356515 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.373000 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.386837 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.400987 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.453507 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.453584 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.453596 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.453610 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.453618 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:45Z","lastTransitionTime":"2026-01-27T07:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.556824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.556888 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.556900 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.556917 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.556928 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:45Z","lastTransitionTime":"2026-01-27T07:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.584291 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:45 crc kubenswrapper[4787]: E0127 07:52:45.584482 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:45 crc kubenswrapper[4787]: E0127 07:52:45.584576 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs podName:3969f21f-ab36-49b4-9a9c-02cf19e65ad0 nodeName:}" failed. No retries permitted until 2026-01-27 07:53:17.584529838 +0000 UTC m=+103.236885330 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs") pod "network-metrics-daemon-vws75" (UID: "3969f21f-ab36-49b4-9a9c-02cf19e65ad0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.660085 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.660152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.660172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.660198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.660218 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:45Z","lastTransitionTime":"2026-01-27T07:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.762609 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.762724 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.762743 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.762764 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.762778 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:45Z","lastTransitionTime":"2026-01-27T07:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.865467 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.865520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.865531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.865570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.865583 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:45Z","lastTransitionTime":"2026-01-27T07:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.968123 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.968168 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.968180 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.968197 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:45 crc kubenswrapper[4787]: I0127 07:52:45.968207 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:45Z","lastTransitionTime":"2026-01-27T07:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.072053 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.072106 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.072118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.072138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.072149 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:46Z","lastTransitionTime":"2026-01-27T07:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.074537 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:37:01.240537854 +0000 UTC Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.175657 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.175705 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.175717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.175736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.175748 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:46Z","lastTransitionTime":"2026-01-27T07:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.277907 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.277949 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.277957 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.277973 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.277982 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:46Z","lastTransitionTime":"2026-01-27T07:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.381698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.381741 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.381754 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.381775 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.381788 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:46Z","lastTransitionTime":"2026-01-27T07:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.483953 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.483994 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.484003 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.484019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.484030 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:46Z","lastTransitionTime":"2026-01-27T07:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.586004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.586036 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.586046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.586059 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.586070 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:46Z","lastTransitionTime":"2026-01-27T07:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.689046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.689143 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.689171 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.689208 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.689241 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:46Z","lastTransitionTime":"2026-01-27T07:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.792185 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.792244 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.792265 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.792293 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.792313 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:46Z","lastTransitionTime":"2026-01-27T07:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.896314 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.896399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.896418 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.896461 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:46 crc kubenswrapper[4787]: I0127 07:52:46.896478 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:46Z","lastTransitionTime":"2026-01-27T07:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.000023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.000076 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.000089 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.000107 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.000121 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:47Z","lastTransitionTime":"2026-01-27T07:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.075124 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:15:50.691439916 +0000 UTC Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.076475 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.076530 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.076581 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.076539 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:47 crc kubenswrapper[4787]: E0127 07:52:47.076664 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:47 crc kubenswrapper[4787]: E0127 07:52:47.076772 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:47 crc kubenswrapper[4787]: E0127 07:52:47.076827 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:47 crc kubenswrapper[4787]: E0127 07:52:47.076862 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.102595 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.102664 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.102685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.102711 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.102732 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:47Z","lastTransitionTime":"2026-01-27T07:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.207151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.207205 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.207227 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.207253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.207272 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:47Z","lastTransitionTime":"2026-01-27T07:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.310341 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.310389 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.310399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.310417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.310427 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:47Z","lastTransitionTime":"2026-01-27T07:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.413475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.413521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.413532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.413565 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.413582 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:47Z","lastTransitionTime":"2026-01-27T07:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.516316 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.516366 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.516382 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.516400 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.516410 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:47Z","lastTransitionTime":"2026-01-27T07:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.587962 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rqjpz_e6f78168-0b0d-464d-b1c7-00bb9a69c0d1/kube-multus/0.log" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.588019 4787 generic.go:334] "Generic (PLEG): container finished" podID="e6f78168-0b0d-464d-b1c7-00bb9a69c0d1" containerID="e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e" exitCode=1 Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.588048 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rqjpz" event={"ID":"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1","Type":"ContainerDied","Data":"e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e"} Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.588410 4787 scope.go:117] "RemoveContainer" containerID="e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.606840 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505
962fddc38dd440a1d56d505f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:30Z\\\",\\\"message\\\":\\\" 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0127 07:52:30.089457 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:52:30.089470 6533 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-rqwfc\\\\nI0127 07:52:30.089462 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0127 07:52:30.089487 6533 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.620359 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.620146 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 
07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.620440 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.620456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.620479 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.620500 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:47Z","lastTransitionTime":"2026-01-27T07:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.632834 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.644440 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.661700 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.677170 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.696188 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.711905 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d6209086784221118
4f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.723831 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.723861 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.723888 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.723903 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.723912 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:47Z","lastTransitionTime":"2026-01-27T07:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.727890 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.741172 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.754105 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.766399 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.795069 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.811244 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.826022 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.826795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.826867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.826882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.826901 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.826913 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:47Z","lastTransitionTime":"2026-01-27T07:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.838468 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.851821 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:46Z\\\",\\\"message\\\":\\\"2026-01-27T07:52:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206\\\\n2026-01-27T07:52:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206 to /host/opt/cni/bin/\\\\n2026-01-27T07:52:01Z [verbose] multus-daemon started\\\\n2026-01-27T07:52:01Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:52:46Z [error] have you checked that your 
default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.869035 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.930516 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.930581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:47 crc 
kubenswrapper[4787]: I0127 07:52:47.930593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.930611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:47 crc kubenswrapper[4787]: I0127 07:52:47.930622 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:47Z","lastTransitionTime":"2026-01-27T07:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.033745 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.033793 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.033809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.033829 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.033843 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:48Z","lastTransitionTime":"2026-01-27T07:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.075656 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:43:51.328486492 +0000 UTC Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.136848 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.136904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.136913 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.136930 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.136940 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:48Z","lastTransitionTime":"2026-01-27T07:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.240272 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.240309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.240321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.240340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.240354 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:48Z","lastTransitionTime":"2026-01-27T07:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.343138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.343179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.343188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.343201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.343212 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:48Z","lastTransitionTime":"2026-01-27T07:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.445718 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.445792 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.445801 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.445820 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.445831 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:48Z","lastTransitionTime":"2026-01-27T07:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.548812 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.548883 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.548898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.548923 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.548938 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:48Z","lastTransitionTime":"2026-01-27T07:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.593776 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rqjpz_e6f78168-0b0d-464d-b1c7-00bb9a69c0d1/kube-multus/0.log" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.593845 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rqjpz" event={"ID":"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1","Type":"ContainerStarted","Data":"4924d9f32dbc90c38c70d18db5a32bdabb76d288c34457331804829e451cf169"} Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.608697 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.629114 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505
962fddc38dd440a1d56d505f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:30Z\\\",\\\"message\\\":\\\" 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0127 07:52:30.089457 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:52:30.089470 6533 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-rqwfc\\\\nI0127 07:52:30.089462 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0127 07:52:30.089487 6533 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.644264 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.653769 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.653813 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.653825 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.653843 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.653855 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:48Z","lastTransitionTime":"2026-01-27T07:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.656739 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.670909 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.684961 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.696785 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.707114 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.717810 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.729251 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.741991 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.755265 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.756332 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.756417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.756431 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.756447 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.756806 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:48Z","lastTransitionTime":"2026-01-27T07:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.769726 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.783844 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4924d9f32dbc90c38c70d18db5a32bdabb76d288c34457331804829e451cf169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:46Z\\\",\\\"message\\\":\\\"2026-01-27T07:52:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206\\\\n2026-01-27T07:52:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206 to /host/opt/cni/bin/\\\\n2026-01-27T07:52:01Z [verbose] multus-daemon started\\\\n2026-01-27T07:52:01Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:52:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.807198 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.825408 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.839443 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.852876 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.860068 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.860136 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.860149 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.860169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.860186 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:48Z","lastTransitionTime":"2026-01-27T07:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.963781 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.964263 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.964422 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.964634 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:48 crc kubenswrapper[4787]: I0127 07:52:48.964782 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:48Z","lastTransitionTime":"2026-01-27T07:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.068129 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.068176 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.068187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.068209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.068222 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:49Z","lastTransitionTime":"2026-01-27T07:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.075907 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 23:45:38.028803353 +0000 UTC Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.076032 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.076063 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.076143 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:49 crc kubenswrapper[4787]: E0127 07:52:49.076171 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:49 crc kubenswrapper[4787]: E0127 07:52:49.076308 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:49 crc kubenswrapper[4787]: E0127 07:52:49.076394 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.077187 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:49 crc kubenswrapper[4787]: E0127 07:52:49.077368 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.170927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.170973 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.170986 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.171004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.171015 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:49Z","lastTransitionTime":"2026-01-27T07:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.274351 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.274403 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.274417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.274435 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.274477 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:49Z","lastTransitionTime":"2026-01-27T07:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.376824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.376889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.376909 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.376938 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.376964 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:49Z","lastTransitionTime":"2026-01-27T07:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.480885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.480960 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.480971 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.480991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.481002 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:49Z","lastTransitionTime":"2026-01-27T07:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.583322 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.583364 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.583373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.583390 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.583400 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:49Z","lastTransitionTime":"2026-01-27T07:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.686974 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.687050 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.687083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.687119 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.687149 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:49Z","lastTransitionTime":"2026-01-27T07:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.791271 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.791324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.791333 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.791351 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.791363 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:49Z","lastTransitionTime":"2026-01-27T07:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.894384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.894500 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.894536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.894610 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.894642 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:49Z","lastTransitionTime":"2026-01-27T07:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.998580 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.998636 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.998648 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.998666 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:49 crc kubenswrapper[4787]: I0127 07:52:49.998678 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:49Z","lastTransitionTime":"2026-01-27T07:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.076840 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:13:37.514423681 +0000 UTC Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.102356 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.102422 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.102439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.102462 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.102476 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:50Z","lastTransitionTime":"2026-01-27T07:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.205477 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.205541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.205585 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.205614 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.205634 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:50Z","lastTransitionTime":"2026-01-27T07:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.309033 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.309094 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.309107 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.309126 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.309137 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:50Z","lastTransitionTime":"2026-01-27T07:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.412201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.412258 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.412273 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.412294 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.412308 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:50Z","lastTransitionTime":"2026-01-27T07:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.516101 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.516190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.516214 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.516244 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.516262 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:50Z","lastTransitionTime":"2026-01-27T07:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.618455 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.618503 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.618516 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.618534 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.618544 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:50Z","lastTransitionTime":"2026-01-27T07:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.721381 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.721450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.721462 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.721485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.721498 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:50Z","lastTransitionTime":"2026-01-27T07:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.824080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.824159 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.824179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.824209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.824231 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:50Z","lastTransitionTime":"2026-01-27T07:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.927078 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.927150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.927167 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.927194 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:50 crc kubenswrapper[4787]: I0127 07:52:50.927215 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:50Z","lastTransitionTime":"2026-01-27T07:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.031213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.031290 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.031312 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.031344 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.031370 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:51Z","lastTransitionTime":"2026-01-27T07:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.076493 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.076493 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:51 crc kubenswrapper[4787]: E0127 07:52:51.076702 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:51 crc kubenswrapper[4787]: E0127 07:52:51.076721 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.076536 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.076513 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:51 crc kubenswrapper[4787]: E0127 07:52:51.076791 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.077000 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:06:19.95521626 +0000 UTC Jan 27 07:52:51 crc kubenswrapper[4787]: E0127 07:52:51.077007 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.134275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.134352 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.134368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.134398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.134427 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:51Z","lastTransitionTime":"2026-01-27T07:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.237219 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.237280 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.237292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.237312 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.237325 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:51Z","lastTransitionTime":"2026-01-27T07:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.340612 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.340659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.340669 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.340685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.340694 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:51Z","lastTransitionTime":"2026-01-27T07:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.445789 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.445868 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.445885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.445915 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.445934 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:51Z","lastTransitionTime":"2026-01-27T07:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.549406 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.549450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.549458 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.549474 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.549492 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:51Z","lastTransitionTime":"2026-01-27T07:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.652080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.652130 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.652142 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.652157 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.652169 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:51Z","lastTransitionTime":"2026-01-27T07:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.756048 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.756169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.756197 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.756232 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.756259 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:51Z","lastTransitionTime":"2026-01-27T07:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.859025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.859068 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.859079 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.859094 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.859104 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:51Z","lastTransitionTime":"2026-01-27T07:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.961283 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.961321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.961330 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.961343 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:51 crc kubenswrapper[4787]: I0127 07:52:51.961353 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:51Z","lastTransitionTime":"2026-01-27T07:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.063478 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.063517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.063528 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.063544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.063569 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.077133 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:28:57.306933215 +0000 UTC Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.165455 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.165494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.165503 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.165519 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.165532 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.269517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.269651 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.269679 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.269720 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.269751 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.372164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.372206 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.372218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.372235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.372247 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.459129 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.459190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.459201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.459218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.459228 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: E0127 07:52:52.473523 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.478471 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.478512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.478523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.478539 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.478566 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: E0127 07:52:52.491254 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.496580 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.496617 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.496627 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.496644 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.496656 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: E0127 07:52:52.511405 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.516254 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.516301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.516317 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.516336 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.516351 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: E0127 07:52:52.529722 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.533883 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.533911 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.533921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.533935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.533945 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: E0127 07:52:52.546324 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:52 crc kubenswrapper[4787]: E0127 07:52:52.546476 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.548082 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.548125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.548137 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.548158 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.548168 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.650828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.650878 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.650898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.650918 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.650958 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.753985 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.754107 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.754130 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.754160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.754180 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.857756 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.857811 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.857830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.857849 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.857862 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.961613 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.961986 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.962089 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.962201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:52 crc kubenswrapper[4787]: I0127 07:52:52.962317 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:52Z","lastTransitionTime":"2026-01-27T07:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.066127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.066200 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.066214 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.066234 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.066248 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:53Z","lastTransitionTime":"2026-01-27T07:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.075844 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.075924 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:53 crc kubenswrapper[4787]: E0127 07:52:53.076198 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.075960 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:53 crc kubenswrapper[4787]: E0127 07:52:53.076650 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.075925 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:53 crc kubenswrapper[4787]: E0127 07:52:53.076373 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:53 crc kubenswrapper[4787]: E0127 07:52:53.077086 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.077965 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:11:27.522865353 +0000 UTC Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.168960 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.169500 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.169599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.169674 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.169745 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:53Z","lastTransitionTime":"2026-01-27T07:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.273970 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.274373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.274483 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.274621 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.274727 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:53Z","lastTransitionTime":"2026-01-27T07:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.377783 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.377880 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.377901 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.377932 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.377953 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:53Z","lastTransitionTime":"2026-01-27T07:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.481599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.481984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.482102 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.482222 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.482321 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:53Z","lastTransitionTime":"2026-01-27T07:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.585523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.585648 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.585670 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.585697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.585717 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:53Z","lastTransitionTime":"2026-01-27T07:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.689521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.689614 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.689637 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.689666 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.689694 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:53Z","lastTransitionTime":"2026-01-27T07:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.793806 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.793924 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.793962 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.794002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.794028 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:53Z","lastTransitionTime":"2026-01-27T07:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.896951 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.897019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.897041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.897069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:53 crc kubenswrapper[4787]: I0127 07:52:53.897089 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:53Z","lastTransitionTime":"2026-01-27T07:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.000134 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.000501 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.000599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.000679 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.000765 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:54Z","lastTransitionTime":"2026-01-27T07:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.079070 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:06:46.169117937 +0000 UTC Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.104344 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.104665 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.104838 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.104988 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.105276 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:54Z","lastTransitionTime":"2026-01-27T07:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.208250 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.208823 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.209070 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.209295 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.209673 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:54Z","lastTransitionTime":"2026-01-27T07:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.313823 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.313903 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.313921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.313948 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.313976 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:54Z","lastTransitionTime":"2026-01-27T07:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.416649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.416717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.416737 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.416763 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.416782 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:54Z","lastTransitionTime":"2026-01-27T07:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.520132 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.520190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.520203 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.520229 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.520243 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:54Z","lastTransitionTime":"2026-01-27T07:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.623382 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.623429 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.623442 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.623461 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.623477 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:54Z","lastTransitionTime":"2026-01-27T07:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.727729 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.727791 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.727810 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.727836 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.727856 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:54Z","lastTransitionTime":"2026-01-27T07:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.831196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.831277 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.831298 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.831328 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.831349 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:54Z","lastTransitionTime":"2026-01-27T07:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.934912 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.935210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.935314 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.935401 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:54 crc kubenswrapper[4787]: I0127 07:52:54.935481 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:54Z","lastTransitionTime":"2026-01-27T07:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.039415 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.039460 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.039472 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.039491 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.039507 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:55Z","lastTransitionTime":"2026-01-27T07:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.076615 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.076629 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.076783 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:55 crc kubenswrapper[4787]: E0127 07:52:55.076813 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:55 crc kubenswrapper[4787]: E0127 07:52:55.076989 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:55 crc kubenswrapper[4787]: E0127 07:52:55.077204 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.077384 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:55 crc kubenswrapper[4787]: E0127 07:52:55.077665 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.079884 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:45:57.45751022 +0000 UTC Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.095830 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.117645 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4924d9f32dbc90c38c70d18db5a32bdabb76d288c34457331804829e451cf169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:46Z\\\",\\\"message\\\":\\\"2026-01-27T07:52:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206\\\\n2026-01-27T07:52:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206 to /host/opt/cni/bin/\\\\n2026-01-27T07:52:01Z [verbose] multus-daemon started\\\\n2026-01-27T07:52:01Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:52:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.141369 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.155454 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.156006 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:55 crc 
kubenswrapper[4787]: I0127 07:52:55.156186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.156355 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.156483 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:55Z","lastTransitionTime":"2026-01-27T07:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.187428 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.213849 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.237800 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.256157 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.260717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.260790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.260812 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.260840 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.260860 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:55Z","lastTransitionTime":"2026-01-27T07:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.285296 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:30Z\\\",\\\"message\\\":\\\" 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0127 07:52:30.089457 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:52:30.089470 6533 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-rqwfc\\\\nI0127 07:52:30.089462 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0127 07:52:30.089487 6533 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.300301 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.321145 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.340243 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.361326 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.363633 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.363674 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.363686 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.363709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.363725 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:55Z","lastTransitionTime":"2026-01-27T07:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.383630 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.399012 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.414656 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.431048 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.456782 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.467684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.467729 
4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.467740 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.467756 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.467770 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:55Z","lastTransitionTime":"2026-01-27T07:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.473089 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.570600 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.570650 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.570659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.570677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.570687 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:55Z","lastTransitionTime":"2026-01-27T07:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.673429 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.673480 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.673496 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.673516 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.673534 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:55Z","lastTransitionTime":"2026-01-27T07:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.776768 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.776826 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.776841 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.776864 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.776887 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:55Z","lastTransitionTime":"2026-01-27T07:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.880190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.880254 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.880267 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.880290 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.880307 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:55Z","lastTransitionTime":"2026-01-27T07:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.983443 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.983502 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.983521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.983581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:55 crc kubenswrapper[4787]: I0127 07:52:55.983602 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:55Z","lastTransitionTime":"2026-01-27T07:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.076763 4787 scope.go:117] "RemoveContainer" containerID="adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.080516 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:41:50.553613017 +0000 UTC Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.086301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.086338 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.086347 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.086364 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.086376 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:56Z","lastTransitionTime":"2026-01-27T07:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.189137 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.189185 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.189196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.189213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.189226 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:56Z","lastTransitionTime":"2026-01-27T07:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.293787 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.293842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.293853 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.293875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.293889 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:56Z","lastTransitionTime":"2026-01-27T07:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.397995 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.398061 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.398075 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.398098 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.398111 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:56Z","lastTransitionTime":"2026-01-27T07:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.500633 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.500687 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.500697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.500714 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.500726 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:56Z","lastTransitionTime":"2026-01-27T07:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.603692 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.603740 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.603753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.603772 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.603786 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:56Z","lastTransitionTime":"2026-01-27T07:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.622591 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/2.log" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.625815 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637"} Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.626335 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.648057 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.662258 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.676416 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.690415 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.705667 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4924d9f32dbc90c38c70d18db5a32bdabb76d288c34457331804829e451cf169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:46Z\\\",\\\"message\\\":\\\"2026-01-27T07:52:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206\\\\n2026-01-27T07:52:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206 to /host/opt/cni/bin/\\\\n2026-01-27T07:52:01Z [verbose] multus-daemon started\\\\n2026-01-27T07:52:01Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:52:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.706639 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.706701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.706714 4787 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.706734 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.706744 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:56Z","lastTransitionTime":"2026-01-27T07:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.724048 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:
52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de
0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],
\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.744715 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3ec81938807879c988aabe13c7bf18add843b4
295e6a4228405f02b47fd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:30Z\\\",\\\"message\\\":\\\" 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0127 07:52:30.089457 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:52:30.089470 6533 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-rqwfc\\\\nI0127 07:52:30.089462 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0127 07:52:30.089487 6533 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 
fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.759576 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 
07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.786708 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.801026 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.808917 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.808951 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.808962 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.808980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.808995 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:56Z","lastTransitionTime":"2026-01-27T07:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.815399 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.827564 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.839754 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.854708 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d6209086784221118
4f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.868527 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.882714 4787 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.895385 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.905857 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.911576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.911629 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.911642 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.911661 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:56 crc kubenswrapper[4787]: I0127 07:52:56.911673 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:56Z","lastTransitionTime":"2026-01-27T07:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.015026 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.015075 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.015086 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.015104 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.015115 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:57Z","lastTransitionTime":"2026-01-27T07:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.076751 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.076775 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.076823 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.076898 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:57 crc kubenswrapper[4787]: E0127 07:52:57.076931 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:57 crc kubenswrapper[4787]: E0127 07:52:57.076963 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:57 crc kubenswrapper[4787]: E0127 07:52:57.077146 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:57 crc kubenswrapper[4787]: E0127 07:52:57.077245 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.081615 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:32:18.268703556 +0000 UTC Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.117465 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.117525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.117599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.117631 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.117652 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:57Z","lastTransitionTime":"2026-01-27T07:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.221579 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.221652 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.221673 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.221708 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.221734 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:57Z","lastTransitionTime":"2026-01-27T07:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.325691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.325772 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.325793 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.325825 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.325845 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:57Z","lastTransitionTime":"2026-01-27T07:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.429536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.429614 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.429636 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.429662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.429680 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:57Z","lastTransitionTime":"2026-01-27T07:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.532737 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.532804 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.532818 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.532839 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.532852 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:57Z","lastTransitionTime":"2026-01-27T07:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.633138 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/3.log" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.633959 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/2.log" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.635312 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.635423 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.635444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.635467 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.635513 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:57Z","lastTransitionTime":"2026-01-27T07:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.638047 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637" exitCode=1 Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.638136 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637"} Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.638214 4787 scope.go:117] "RemoveContainer" containerID="adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.639219 4787 scope.go:117] "RemoveContainer" containerID="6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637" Jan 27 07:52:57 crc kubenswrapper[4787]: E0127 07:52:57.639491 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.664115 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.683817 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.700727 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.713874 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.724087 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.746565 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.746614 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.746624 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.746641 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.746652 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:57Z","lastTransitionTime":"2026-01-27T07:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.749615 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.763866 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.777525 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.793537 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.810476 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4924d9f32dbc90c38c70d18db5a32bdabb76d288c34457331804829e451cf169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:46Z\\\",\\\"message\\\":\\\"2026-01-27T07:52:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206\\\\n2026-01-27T07:52:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206 to /host/opt/cni/bin/\\\\n2026-01-27T07:52:01Z [verbose] multus-daemon started\\\\n2026-01-27T07:52:01Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:52:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.826772 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.849611 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adb98705453ca5a2191ab57c0255b907d1e1d505962fddc38dd440a1d56d505f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:30Z\\\",\\\"message\\\":\\\" 6533 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0127 07:52:30.089457 6533 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:52:30.089470 6533 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-rqwfc\\\\nI0127 07:52:30.089462 6533 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0127 07:52:30.089487 6533 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:57Z\\\",\\\"message\\\":\\\"57 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0127 07:52:57.037280 6957 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default are: map[]\\\\nI0127 07:52:57.037300 6957 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}\\\\nI0127 07:52:57.037235 6957 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-oauth-apiserver/api\\\\\\\"}\\\\nI0127 07:52:57.037326 6957 services_controller.go:360] Finished syncing service cluster-autoscaler-operator on namespace openshift-machine-api for network=default : 4.740846ms\\\\nI0127 07:52:57.037332 6957 services_controller.go:360] Finished syncing service api on namespace openshift-oauth-apiserver for network=default : 1.998381ms\\\\nI0127 07:52:57.037373 6957 services_controller.go:356] Processing sync for service openshift-ingress-canary/ingress-canary for network=default\\\\nF0127 07:52:57.037046 6957 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e6002\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d
97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.850063 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.850158 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.850188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.850217 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:57 crc 
kubenswrapper[4787]: I0127 07:52:57.850237 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:57Z","lastTransitionTime":"2026-01-27T07:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.866034 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.884095 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.901040 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.917607 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.932014 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.948727 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.954176 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.954241 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.954261 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.954284 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:57 crc kubenswrapper[4787]: I0127 07:52:57.954301 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:57Z","lastTransitionTime":"2026-01-27T07:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.058043 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.058124 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.058144 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.058179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.058199 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:58Z","lastTransitionTime":"2026-01-27T07:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.081847 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 16:47:40.592824393 +0000 UTC Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.160523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.160584 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.160598 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.160622 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.160639 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:58Z","lastTransitionTime":"2026-01-27T07:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.263896 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.263954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.263969 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.263988 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.264003 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:58Z","lastTransitionTime":"2026-01-27T07:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.367830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.367884 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.367900 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.367923 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.367937 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:58Z","lastTransitionTime":"2026-01-27T07:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.470768 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.470851 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.470875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.470912 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.470932 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:58Z","lastTransitionTime":"2026-01-27T07:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.574940 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.574984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.574999 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.575017 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.575027 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:58Z","lastTransitionTime":"2026-01-27T07:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.643622 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/3.log" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.648069 4787 scope.go:117] "RemoveContainer" containerID="6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637" Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.648263 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.667903 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.678292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.678327 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.678339 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.678362 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.678375 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:58Z","lastTransitionTime":"2026-01-27T07:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.685219 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.698893 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.710325 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.721680 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.734130 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.746765 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.758436 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.768408 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fghc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.782472 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.782522 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.782537 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.782577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.782593 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:58Z","lastTransitionTime":"2026-01-27T07:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.787629 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4924d9f32dbc90c38c70d18db5a32bdabb76d288c34457331804829e451cf169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:46Z\\\",\\\"message\\\":\\\"2026-01-27T07:52:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206\\\\n2026-01-27T07:52:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206 to /host/opt/cni/bin/\\\\n2026-01-27T07:52:01Z [verbose] multus-daemon started\\\\n2026-01-27T07:52:01Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:52:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.811411 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.844908 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359
762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.866457 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.886144 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.886195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.886209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.886235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.886250 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:58Z","lastTransitionTime":"2026-01-27T07:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.888075 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.905390 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.939460 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3ec81938807879c988aabe13c7bf18add843b4
295e6a4228405f02b47fd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:57Z\\\",\\\"message\\\":\\\"57 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0127 07:52:57.037280 6957 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default are: map[]\\\\nI0127 07:52:57.037300 6957 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}\\\\nI0127 07:52:57.037235 6957 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-oauth-apiserver/api\\\\\\\"}\\\\nI0127 07:52:57.037326 6957 services_controller.go:360] Finished syncing service cluster-autoscaler-operator on namespace openshift-machine-api for network=default : 4.740846ms\\\\nI0127 07:52:57.037332 6957 services_controller.go:360] Finished syncing service api on namespace openshift-oauth-apiserver for network=default : 1.998381ms\\\\nI0127 07:52:57.037373 6957 services_controller.go:356] Processing sync for service openshift-ingress-canary/ingress-canary for network=default\\\\nF0127 07:52:57.037046 6957 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e6002\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.947769 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.947926 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.947962 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948036 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948062 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.948014157 +0000 UTC m=+148.600369649 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948110 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.948099399 +0000 UTC m=+148.600454891 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.948292 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948325 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948341 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.948346 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948354 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948445 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948398 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948533 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948578 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948491 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.948471267 +0000 UTC m=+148.600826759 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948628 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.94861979 +0000 UTC m=+148.600975282 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:58 crc kubenswrapper[4787]: E0127 07:52:58.948641 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.94863446 +0000 UTC m=+148.600989952 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.957258 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.973130 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:52:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.990384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.990453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.990471 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.990501 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:58 crc kubenswrapper[4787]: I0127 07:52:58.990524 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:58Z","lastTransitionTime":"2026-01-27T07:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.076134 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.076211 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:52:59 crc kubenswrapper[4787]: E0127 07:52:59.076282 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.076227 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.076227 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:52:59 crc kubenswrapper[4787]: E0127 07:52:59.076420 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:52:59 crc kubenswrapper[4787]: E0127 07:52:59.076638 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:52:59 crc kubenswrapper[4787]: E0127 07:52:59.076784 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.082856 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:30:41.728034698 +0000 UTC Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.093724 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.093764 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.093777 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.093796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.093812 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:59Z","lastTransitionTime":"2026-01-27T07:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.197927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.197991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.198009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.198034 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.198052 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:59Z","lastTransitionTime":"2026-01-27T07:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.301056 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.301112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.301128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.301153 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.301171 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:59Z","lastTransitionTime":"2026-01-27T07:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.404203 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.404281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.404309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.404339 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.404362 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:59Z","lastTransitionTime":"2026-01-27T07:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.507721 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.507798 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.507877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.507981 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.508006 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:59Z","lastTransitionTime":"2026-01-27T07:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.611631 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.611700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.611720 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.611750 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.611769 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:59Z","lastTransitionTime":"2026-01-27T07:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.715395 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.715479 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.715504 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.715538 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.715615 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:59Z","lastTransitionTime":"2026-01-27T07:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.818908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.818948 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.818959 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.818978 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.818991 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:59Z","lastTransitionTime":"2026-01-27T07:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.922779 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.922857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.922874 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.922897 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:52:59 crc kubenswrapper[4787]: I0127 07:52:59.922913 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:52:59Z","lastTransitionTime":"2026-01-27T07:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.026859 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.026918 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.026937 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.026965 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.026989 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:00Z","lastTransitionTime":"2026-01-27T07:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.083250 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:00:59.200373332 +0000 UTC Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.092829 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.131476 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.131539 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.131573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.131601 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.131620 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:00Z","lastTransitionTime":"2026-01-27T07:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.234984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.235046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.235059 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.235078 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.235089 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:00Z","lastTransitionTime":"2026-01-27T07:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.338509 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.338578 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.338590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.338610 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.338624 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:00Z","lastTransitionTime":"2026-01-27T07:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.442048 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.442131 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.442152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.442179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.442199 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:00Z","lastTransitionTime":"2026-01-27T07:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.545547 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.545664 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.545691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.545719 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.545744 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:00Z","lastTransitionTime":"2026-01-27T07:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.649601 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.649678 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.649700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.649726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.649744 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:00Z","lastTransitionTime":"2026-01-27T07:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.752813 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.752864 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.752877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.752895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.752907 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:00Z","lastTransitionTime":"2026-01-27T07:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.857035 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.857118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.857139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.857170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.857192 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:00Z","lastTransitionTime":"2026-01-27T07:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.961366 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.961432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.961450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.961481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:00 crc kubenswrapper[4787]: I0127 07:53:00.961506 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:00Z","lastTransitionTime":"2026-01-27T07:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.065298 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.065373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.065390 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.065417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.065436 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:01Z","lastTransitionTime":"2026-01-27T07:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.075782 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.075931 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:01 crc kubenswrapper[4787]: E0127 07:53:01.076009 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.076071 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.076079 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:01 crc kubenswrapper[4787]: E0127 07:53:01.076295 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:01 crc kubenswrapper[4787]: E0127 07:53:01.076411 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:01 crc kubenswrapper[4787]: E0127 07:53:01.076625 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.084339 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:48:11.899995964 +0000 UTC Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.168894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.168955 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.168975 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.169000 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.169018 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:01Z","lastTransitionTime":"2026-01-27T07:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.272818 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.272877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.272889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.272910 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.272923 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:01Z","lastTransitionTime":"2026-01-27T07:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.376329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.376739 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.376949 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.377110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.377285 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:01Z","lastTransitionTime":"2026-01-27T07:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.480373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.480442 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.480465 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.480497 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.480517 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:01Z","lastTransitionTime":"2026-01-27T07:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.583805 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.583885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.583957 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.583993 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.584011 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:01Z","lastTransitionTime":"2026-01-27T07:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.688078 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.688163 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.688182 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.688212 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.688231 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:01Z","lastTransitionTime":"2026-01-27T07:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.791118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.791517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.791615 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.791690 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.791765 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:01Z","lastTransitionTime":"2026-01-27T07:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.895394 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.895477 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.895497 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.895531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.895600 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:01Z","lastTransitionTime":"2026-01-27T07:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.998036 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.998083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.998095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.998114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:01 crc kubenswrapper[4787]: I0127 07:53:01.998127 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:01Z","lastTransitionTime":"2026-01-27T07:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.084524 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:13:12.359759477 +0000 UTC Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.101083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.101128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.101140 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.101157 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.101167 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.203201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.203251 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.203267 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.203286 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.203303 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.307448 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.307505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.307519 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.307540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.307599 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.409956 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.410001 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.410011 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.410029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.410041 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.513415 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.513481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.513504 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.513525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.513537 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.617432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.617524 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.617585 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.617619 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.617638 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.721469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.721584 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.721614 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.721649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.721676 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.781509 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.781625 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.781649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.781681 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.781704 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: E0127 07:53:02.800641 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.805840 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.805918 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.805938 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.805972 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.805998 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: E0127 07:53:02.825494 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.831118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.831305 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.831393 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.831510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.831627 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: E0127 07:53:02.855377 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.859492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.859593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.859612 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.859640 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.859657 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: E0127 07:53:02.874666 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.881759 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.881837 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.881852 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.881873 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.881887 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:02 crc kubenswrapper[4787]: E0127 07:53:02.896503 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:02 crc kubenswrapper[4787]: E0127 07:53:02.896724 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.898524 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.898571 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.898592 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.898617 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:02 crc kubenswrapper[4787]: I0127 07:53:02.898627 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:02Z","lastTransitionTime":"2026-01-27T07:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.001101 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.001180 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.001195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.001217 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.001234 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:03Z","lastTransitionTime":"2026-01-27T07:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.075832 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.075832 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.076009 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:03 crc kubenswrapper[4787]: E0127 07:53:03.076170 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:03 crc kubenswrapper[4787]: E0127 07:53:03.076489 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:03 crc kubenswrapper[4787]: E0127 07:53:03.076534 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.076301 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:03 crc kubenswrapper[4787]: E0127 07:53:03.076727 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.085576 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:06:46.900548752 +0000 UTC Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.104001 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.104041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.104070 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.104091 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.104103 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:03Z","lastTransitionTime":"2026-01-27T07:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.207151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.207230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.207248 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.207278 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.207298 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:03Z","lastTransitionTime":"2026-01-27T07:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.310978 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.311790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.312113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.312227 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.312312 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:03Z","lastTransitionTime":"2026-01-27T07:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.415858 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.415928 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.415948 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.415977 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.416001 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:03Z","lastTransitionTime":"2026-01-27T07:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.520540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.520824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.520906 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.520982 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.521046 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:03Z","lastTransitionTime":"2026-01-27T07:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.623984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.624351 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.624417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.624508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.624636 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:03Z","lastTransitionTime":"2026-01-27T07:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.727379 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.727450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.727475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.727507 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.727530 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:03Z","lastTransitionTime":"2026-01-27T07:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.830738 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.830870 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.830899 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.830924 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.830944 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:03Z","lastTransitionTime":"2026-01-27T07:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.933511 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.933684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.933719 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.933752 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:03 crc kubenswrapper[4787]: I0127 07:53:03.933776 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:03Z","lastTransitionTime":"2026-01-27T07:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.042506 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.042627 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.042659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.042686 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.042702 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:04Z","lastTransitionTime":"2026-01-27T07:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.086491 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:55:36.020738257 +0000 UTC Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.146633 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.146719 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.146748 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.146783 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.146807 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:04Z","lastTransitionTime":"2026-01-27T07:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.250722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.250798 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.250816 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.250845 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.250864 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:04Z","lastTransitionTime":"2026-01-27T07:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.354971 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.355026 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.355042 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.355062 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.355075 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:04Z","lastTransitionTime":"2026-01-27T07:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.459271 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.459374 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.459402 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.459434 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.459460 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:04Z","lastTransitionTime":"2026-01-27T07:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.562835 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.562924 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.562945 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.562975 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.562997 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:04Z","lastTransitionTime":"2026-01-27T07:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.666934 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.666979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.666992 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.667008 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.667019 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:04Z","lastTransitionTime":"2026-01-27T07:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.770366 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.770445 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.770469 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.770498 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.770519 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:04Z","lastTransitionTime":"2026-01-27T07:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.873428 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.873523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.873543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.873611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.873639 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:04Z","lastTransitionTime":"2026-01-27T07:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.977096 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.977139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.977148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.977165 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:04 crc kubenswrapper[4787]: I0127 07:53:04.977177 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:04Z","lastTransitionTime":"2026-01-27T07:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.076400 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.076492 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.076428 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:05 crc kubenswrapper[4787]: E0127 07:53:05.076750 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:05 crc kubenswrapper[4787]: E0127 07:53:05.077086 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:05 crc kubenswrapper[4787]: E0127 07:53:05.077214 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.077242 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:05 crc kubenswrapper[4787]: E0127 07:53:05.077434 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.080343 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.080485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.080629 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.080723 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.080829 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:05Z","lastTransitionTime":"2026-01-27T07:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.087338 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:14:26.447094203 +0000 UTC Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.098148 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f051e184-acac-47cf-9e04-9df648288715\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf36b99a3389bdbbca8142539870671f6be61df22503df198f6255937e619950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvg7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4fh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.113586 4787 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d28aae4-3359-4d7e-8862-8db4b17a3403\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d820bf9fe1c788b0bb17535226d96cd310fc188148091fa8b4186dd43baa5f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d407da23fc5babf6cec063fb31861fd8ac0f456a44746a83740c61c1e53a436c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0509b185e88f56d56bc6f8429a620d74855d938f1206d6c9193d8e93fc12ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec120e647d27b70e3066a00046d12258db3162ab9beef75b4079d546c2533aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.131126 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2974b5846d4b9e95a3eab849f160c5b33393ff6b1bf1275bc6476ec80807d632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4426d84c4375c7c659c77049bbd6a720bf004fa5e7780b1e8739a2620c3667f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.146180 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5a64cdfd5f8aff93294f92aebc9ccf8fa2a4063e347fce2e35604d27651d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.161434 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-fghc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cac1bb0-6b6f-442f-9db4-a25d4f194bb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d098d784c9e43aa1c5b9014dd290f1a23ddf0651dfd603c634682a8f05da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8zvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fghc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.174615 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rqwfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe88a860-6bb6-40ef-8ed7-c16f06fad8d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0dc7c2423b05f4eb857a5a6d8a5006cea055965d20a85e84659b369f9659eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scmbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rqwfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.184032 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.184114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.184128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.184150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.184169 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:05Z","lastTransitionTime":"2026-01-27T07:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.192924 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bef4a57-904d-47b1-baa1-224c5c9f2b1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c0fb5499a4bed619d003da60f83076e413fc5bda47a12d42770f567deeeec8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c470d3edd878f69c48f1457780557dd56300c0fe69342c479f1922dca856bceb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba5d04b33d62090867842211184f53bc23cf78b90a4c6709792639b74d1cf0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.215063 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60eef58e-b2eb-43d9-a499-317083a89ca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 07:51:54.984485 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 07:51:54.984680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:51:54.985504 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2155430209/tls.crt::/tmp/serving-cert-2155430209/tls.key\\\\\\\"\\\\nI0127 07:51:55.207010 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:51:55.214027 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:51:55.214056 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:51:55.214085 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:51:55.214092 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:51:55.221404 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:51:55.221608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221697 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:51:55.221768 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:51:55.221825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:51:55.221877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:51:55.221926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:51:55.221414 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:51:55.222961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.230761 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.250010 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.266358 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rqjpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4924d9f32dbc90c38c70d18db5a32bdabb76d288c34457331804829e451cf169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:46Z\\\",\\\"message\\\":\\\"2026-01-27T07:52:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206\\\\n2026-01-27T07:52:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3d832606-6840-4320-b93f-1be93a343206 to /host/opt/cni/bin/\\\\n2026-01-27T07:52:01Z [verbose] multus-daemon started\\\\n2026-01-27T07:52:01Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:52:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tz7b2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rqjpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.284209 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff88955-7cd9-4af4-9e0e-79614a1d2994\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47c65fabdecb2c24a7eeabdff7a89339f711ff64e9cb6516900be57a2753f89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7833b50fe96981aa0ed63aa03b7078def1c809e60092a901a8b7ebaee78ad8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b5a15ba402c52fc6f3cd5d882f597c90ea8d1ab4e836f0e0998a301126f5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb64425ded7611030cccfa7a439b14e545bf83bc48c762df144eddc7d08487\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9782b2a339d2dba1498f28ed3f7e6c4c2e747a282465311592f0fdb5863f823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8924a3ef645447151d91c506d105c9760f0dce0c568e9bf37ea76108a6352b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0da2ec606517cc9e0c2ed1d549c947db2eecb603f9bc9ba8bddbee0aad4710\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqb6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.287301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.287346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:05 crc 
kubenswrapper[4787]: I0127 07:53:05.287383 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.287408 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.287424 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:05Z","lastTransitionTime":"2026-01-27T07:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.310873 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7279787f-4f29-4eae-a213-b7ea5d579b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486c1b541ff6adc6a3cdab348fbdd7fbcf1c202c1a474e5e5db37bbcfb38fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47800c35810b19642ee37e41f20900bef93d843d1d7e2219547dde7bf88cc7dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85684c09dd546fc9a972aa45da092b22d794a5588140a451becfbe962be82b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa5c186d3ea5f41540e7450b9ee5a5bbfe8359762ab9aa219a4e4bb26fb0a94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9717409dfdeb0d26d9a1c28a651d1feddfd4aaa11a9bc7db7206fbf8c6400c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fb183cca1d2f6f6f5bd9c80107e85fe0f5f39ea985a036ba115836e45d7cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9bb746c11b2a67e014815e16d6765a1c86a3d2c156cae83853881bdb988507c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fa78137ccc23e32f6eb838abf3991ca3d99b8349721f1419c892a7f8795f55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.345088 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a18f3e6327858658093c392b8b7de18ec6a1085c08ae31ae2f2dc371555011a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.371214 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.390410 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.390462 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.390473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.390492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.390504 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:05Z","lastTransitionTime":"2026-01-27T07:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.393290 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vws75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vws75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.407432 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d547b44-2e4d-4ca1-b1a2-e811ddaef9d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae58e072653f1ba9869c3d6a218332a184032d95378dfba6be1d0c683d6ddf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a626256ec689285dfaf54ef25e23d40bef94d10fec21b7eca529c49e527d5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a626256ec689285dfaf54ef25e23d40bef94d10fec21b7eca529c49e527d5a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:51:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:51:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.428296 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa44405c-042c-485a-ab6c-912dcd377751\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:52:57Z\\\",\\\"message\\\":\\\"57 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0127 07:52:57.037280 6957 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default are: map[]\\\\nI0127 07:52:57.037300 6957 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}\\\\nI0127 07:52:57.037235 6957 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-oauth-apiserver/api\\\\\\\"}\\\\nI0127 07:52:57.037326 6957 services_controller.go:360] Finished syncing service cluster-autoscaler-operator on namespace openshift-machine-api for network=default : 4.740846ms\\\\nI0127 07:52:57.037332 6957 services_controller.go:360] Finished syncing service api on namespace openshift-oauth-apiserver for network=default : 1.998381ms\\\\nI0127 07:52:57.037373 6957 services_controller.go:356] Processing sync for service openshift-ingress-canary/ingress-canary for network=default\\\\nF0127 07:52:57.037046 6957 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e6002\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tj4tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:51:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7642m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.443274 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"643aaef9-e302-436b-943e-940480ef74fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:52:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda08199034af06aa5966ebd72a8a553fe0c83acf73bbd42f4f7d6bb121b2e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea42ae75a943bc89609995381bc82366333131d7c7200a3ab758b9de239e3283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:52:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8q989\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:52:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhss5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.494611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.494685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.494704 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.494732 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.494753 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:05Z","lastTransitionTime":"2026-01-27T07:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.597824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.597875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.597893 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.597912 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.597925 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:05Z","lastTransitionTime":"2026-01-27T07:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.700340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.700405 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.700420 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.700446 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.700462 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:05Z","lastTransitionTime":"2026-01-27T07:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.804587 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.804636 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.804645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.804666 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.804677 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:05Z","lastTransitionTime":"2026-01-27T07:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.907506 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.907640 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.907660 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.907685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:05 crc kubenswrapper[4787]: I0127 07:53:05.907705 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:05Z","lastTransitionTime":"2026-01-27T07:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.010771 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.010849 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.010870 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.010905 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.010925 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:06Z","lastTransitionTime":"2026-01-27T07:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.087528 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:26:47.194521815 +0000 UTC Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.114600 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.114670 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.114689 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.114716 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.114736 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:06Z","lastTransitionTime":"2026-01-27T07:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.218993 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.219049 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.219062 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.219081 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.219097 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:06Z","lastTransitionTime":"2026-01-27T07:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.322578 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.322635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.322647 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.322669 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.322685 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:06Z","lastTransitionTime":"2026-01-27T07:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.424977 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.425030 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.425042 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.425066 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.425104 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:06Z","lastTransitionTime":"2026-01-27T07:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.528438 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.528489 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.528501 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.528521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.528538 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:06Z","lastTransitionTime":"2026-01-27T07:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.631600 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.631645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.631656 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.631677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.631697 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:06Z","lastTransitionTime":"2026-01-27T07:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.734524 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.734655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.734672 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.734698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.734716 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:06Z","lastTransitionTime":"2026-01-27T07:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.837025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.837094 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.837109 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.837130 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.837141 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:06Z","lastTransitionTime":"2026-01-27T07:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.939796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.939842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.939852 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.939866 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:06 crc kubenswrapper[4787]: I0127 07:53:06.939876 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:06Z","lastTransitionTime":"2026-01-27T07:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.043314 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.043392 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.043411 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.043440 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.043463 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:07Z","lastTransitionTime":"2026-01-27T07:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.076060 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.076214 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:07 crc kubenswrapper[4787]: E0127 07:53:07.076227 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.076536 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.076861 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:07 crc kubenswrapper[4787]: E0127 07:53:07.076837 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:07 crc kubenswrapper[4787]: E0127 07:53:07.077042 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:07 crc kubenswrapper[4787]: E0127 07:53:07.077268 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.088450 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:29:25.31940747 +0000 UTC Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.147362 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.147409 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.147418 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.147437 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.147450 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:07Z","lastTransitionTime":"2026-01-27T07:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.250677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.250763 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.250781 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.250814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.250834 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:07Z","lastTransitionTime":"2026-01-27T07:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.354099 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.354177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.354196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.354223 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.354245 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:07Z","lastTransitionTime":"2026-01-27T07:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.457386 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.457457 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.457472 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.457497 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.457512 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:07Z","lastTransitionTime":"2026-01-27T07:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.560604 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.560699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.560717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.560742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.560759 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:07Z","lastTransitionTime":"2026-01-27T07:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.663597 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.663664 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.663680 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.663704 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.663726 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:07Z","lastTransitionTime":"2026-01-27T07:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.766972 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.767032 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.767054 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.767076 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.767091 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:07Z","lastTransitionTime":"2026-01-27T07:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.870657 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.870700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.870710 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.870726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.870736 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:07Z","lastTransitionTime":"2026-01-27T07:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.973522 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.973614 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.973637 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.973662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:07 crc kubenswrapper[4787]: I0127 07:53:07.973684 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:07Z","lastTransitionTime":"2026-01-27T07:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.077002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.077094 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.077113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.077139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.077160 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:08Z","lastTransitionTime":"2026-01-27T07:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.089579 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 22:37:11.807882283 +0000 UTC Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.182417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.182886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.182985 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.183110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.183238 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:08Z","lastTransitionTime":"2026-01-27T07:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.286492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.286573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.286588 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.286615 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.286627 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:08Z","lastTransitionTime":"2026-01-27T07:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.389877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.390264 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.390458 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.390674 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.390810 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:08Z","lastTransitionTime":"2026-01-27T07:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.494023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.494088 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.494105 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.494133 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.494154 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:08Z","lastTransitionTime":"2026-01-27T07:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.598117 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.598174 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.598187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.598206 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.598217 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:08Z","lastTransitionTime":"2026-01-27T07:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.702063 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.702133 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.702147 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.702170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.702188 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:08Z","lastTransitionTime":"2026-01-27T07:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.805677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.805740 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.805753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.805775 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.805792 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:08Z","lastTransitionTime":"2026-01-27T07:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.909503 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.909569 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.909582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.909602 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:08 crc kubenswrapper[4787]: I0127 07:53:08.909617 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:08Z","lastTransitionTime":"2026-01-27T07:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.013284 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.013355 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.013381 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.013417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.013444 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:09Z","lastTransitionTime":"2026-01-27T07:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.077144 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.077203 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:09 crc kubenswrapper[4787]: E0127 07:53:09.077319 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.077430 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:09 crc kubenswrapper[4787]: E0127 07:53:09.077486 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.077613 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:09 crc kubenswrapper[4787]: E0127 07:53:09.077765 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:09 crc kubenswrapper[4787]: E0127 07:53:09.077876 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.089792 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:58:03.668382459 +0000 UTC Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.116442 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.116577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.116611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.116641 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.116662 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:09Z","lastTransitionTime":"2026-01-27T07:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.220185 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.220242 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.220253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.220275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.220289 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:09Z","lastTransitionTime":"2026-01-27T07:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.323875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.323951 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.323969 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.323998 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.324016 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:09Z","lastTransitionTime":"2026-01-27T07:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.427527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.427611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.427625 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.427671 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.427684 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:09Z","lastTransitionTime":"2026-01-27T07:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.531125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.531192 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.531207 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.531230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.531247 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:09Z","lastTransitionTime":"2026-01-27T07:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.634202 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.634235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.634295 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.634311 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.634321 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:09Z","lastTransitionTime":"2026-01-27T07:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.737424 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.737497 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.737513 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.737538 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.737575 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:09Z","lastTransitionTime":"2026-01-27T07:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.840231 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.840278 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.840289 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.840304 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.840315 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:09Z","lastTransitionTime":"2026-01-27T07:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.943083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.943172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.943191 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.943223 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:09 crc kubenswrapper[4787]: I0127 07:53:09.943236 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:09Z","lastTransitionTime":"2026-01-27T07:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.045972 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.046251 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.046359 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.046429 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.046498 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:10Z","lastTransitionTime":"2026-01-27T07:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.090483 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:41:49.259817597 +0000 UTC Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.150347 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.150443 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.150464 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.150856 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.151102 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:10Z","lastTransitionTime":"2026-01-27T07:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.254488 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.254619 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.254653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.254689 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.254713 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:10Z","lastTransitionTime":"2026-01-27T07:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.357870 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.357911 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.357921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.357939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.357949 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:10Z","lastTransitionTime":"2026-01-27T07:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.460394 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.460502 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.460527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.460624 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.460651 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:10Z","lastTransitionTime":"2026-01-27T07:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.564607 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.564982 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.565066 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.565153 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.565211 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:10Z","lastTransitionTime":"2026-01-27T07:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.668720 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.668878 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.668913 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.668950 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.668973 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:10Z","lastTransitionTime":"2026-01-27T07:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.772079 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.772115 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.772125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.772142 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.772151 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:10Z","lastTransitionTime":"2026-01-27T07:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.874886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.874978 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.874997 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.875027 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.875047 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:10Z","lastTransitionTime":"2026-01-27T07:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.978989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.979080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.979105 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.979140 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:10 crc kubenswrapper[4787]: I0127 07:53:10.979169 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:10Z","lastTransitionTime":"2026-01-27T07:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.076224 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.076418 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:11 crc kubenswrapper[4787]: E0127 07:53:11.076564 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.076601 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:11 crc kubenswrapper[4787]: E0127 07:53:11.076823 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:11 crc kubenswrapper[4787]: E0127 07:53:11.077583 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.078632 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:11 crc kubenswrapper[4787]: E0127 07:53:11.079031 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.083082 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.083455 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.083684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.083956 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.084267 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:11Z","lastTransitionTime":"2026-01-27T07:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.091160 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:16:19.436378694 +0000 UTC Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.187533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.187582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.187593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.187607 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.187618 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:11Z","lastTransitionTime":"2026-01-27T07:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.289803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.289845 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.289854 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.289868 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.289878 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:11Z","lastTransitionTime":"2026-01-27T07:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.392612 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.392979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.393092 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.393269 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.393464 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:11Z","lastTransitionTime":"2026-01-27T07:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.497139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.497422 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.497524 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.497639 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.497743 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:11Z","lastTransitionTime":"2026-01-27T07:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.601292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.601372 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.601395 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.601430 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.601452 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:11Z","lastTransitionTime":"2026-01-27T07:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.703888 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.703979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.704006 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.704040 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.704065 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:11Z","lastTransitionTime":"2026-01-27T07:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.807758 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.808211 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.808335 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.808448 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.808579 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:11Z","lastTransitionTime":"2026-01-27T07:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.911281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.911321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.911330 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.911346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:11 crc kubenswrapper[4787]: I0127 07:53:11.911357 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:11Z","lastTransitionTime":"2026-01-27T07:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.015008 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.015398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.015474 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.015602 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.015689 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:12Z","lastTransitionTime":"2026-01-27T07:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.092001 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:25:23.100468862 +0000 UTC Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.119114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.119365 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.119524 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.119814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.120038 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:12Z","lastTransitionTime":"2026-01-27T07:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.228527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.228653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.228676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.228709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.228737 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:12Z","lastTransitionTime":"2026-01-27T07:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.332247 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.332322 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.332344 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.332398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.332418 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:12Z","lastTransitionTime":"2026-01-27T07:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.436026 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.436075 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.436089 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.436109 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.436126 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:12Z","lastTransitionTime":"2026-01-27T07:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.538473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.538530 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.538582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.538635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.538662 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:12Z","lastTransitionTime":"2026-01-27T07:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.642151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.642250 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.642288 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.642322 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.642344 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:12Z","lastTransitionTime":"2026-01-27T07:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.745332 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.745379 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.745387 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.745404 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.745414 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:12Z","lastTransitionTime":"2026-01-27T07:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.848741 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.848818 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.848837 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.848865 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.848885 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:12Z","lastTransitionTime":"2026-01-27T07:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.952781 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.952863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.952899 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.952949 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:12 crc kubenswrapper[4787]: I0127 07:53:12.952975 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:12Z","lastTransitionTime":"2026-01-27T07:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.055725 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.055777 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.055785 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.055803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.055815 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.076130 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.076203 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.076283 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:13 crc kubenswrapper[4787]: E0127 07:53:13.076451 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.076497 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:13 crc kubenswrapper[4787]: E0127 07:53:13.076545 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:13 crc kubenswrapper[4787]: E0127 07:53:13.076818 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:13 crc kubenswrapper[4787]: E0127 07:53:13.076908 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.078061 4787 scope.go:117] "RemoveContainer" containerID="6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637" Jan 27 07:53:13 crc kubenswrapper[4787]: E0127 07:53:13.078218 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.092671 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:12:36.090549499 +0000 UTC Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.158788 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.158824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.158833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.158850 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.158863 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.262129 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.262188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.262201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.262229 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.262246 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.290027 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.290102 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.290114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.290133 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.290147 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: E0127 07:53:13.306434 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.311988 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.312262 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.312464 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.312684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.312845 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: E0127 07:53:13.331845 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.337234 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.337577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.337708 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.337815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.337894 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: E0127 07:53:13.354444 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.358603 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.358728 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.358811 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.358892 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.358955 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: E0127 07:53:13.371708 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.377260 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.377321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.377340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.377368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.377388 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: E0127 07:53:13.394072 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55ded10c-ad4c-443d-a571-c7a8f4a50b03\\\",\\\"systemUUID\\\":\\\"553a2493-323d-43ed-bfcf-44c5a8746c19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:53:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:53:13 crc kubenswrapper[4787]: E0127 07:53:13.394589 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.397488 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.397686 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.397915 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.398159 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.398373 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.501544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.501940 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.502033 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.502110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.502173 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.604670 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.604723 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.604735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.604754 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.604765 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.707277 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.707325 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.707334 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.707351 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.707362 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.810513 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.810590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.810605 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.810630 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.810648 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.913103 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.913155 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.913167 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.913185 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:13 crc kubenswrapper[4787]: I0127 07:53:13.913196 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:13Z","lastTransitionTime":"2026-01-27T07:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.015952 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.016004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.016017 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.016035 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.016049 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:14Z","lastTransitionTime":"2026-01-27T07:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.093196 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:30:05.548127765 +0000 UTC Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.119135 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.119181 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.119192 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.119208 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.119220 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:14Z","lastTransitionTime":"2026-01-27T07:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.223230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.223611 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.223712 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.223826 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.223915 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:14Z","lastTransitionTime":"2026-01-27T07:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.327466 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.327528 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.327568 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.327593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.327613 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:14Z","lastTransitionTime":"2026-01-27T07:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.430966 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.431025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.431039 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.431064 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.431080 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:14Z","lastTransitionTime":"2026-01-27T07:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.534776 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.534846 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.534862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.534886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.534902 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:14Z","lastTransitionTime":"2026-01-27T07:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.637466 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.637540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.637591 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.637618 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.637638 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:14Z","lastTransitionTime":"2026-01-27T07:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.740474 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.740544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.740597 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.740623 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.740642 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:14Z","lastTransitionTime":"2026-01-27T07:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.842982 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.843021 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.843030 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.843045 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.843056 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:14Z","lastTransitionTime":"2026-01-27T07:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.945643 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.945696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.945708 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.945727 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:14 crc kubenswrapper[4787]: I0127 07:53:14.945742 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:14Z","lastTransitionTime":"2026-01-27T07:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.049692 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.049750 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.049764 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.049789 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.049806 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:15Z","lastTransitionTime":"2026-01-27T07:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.076544 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.076599 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.076760 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:15 crc kubenswrapper[4787]: E0127 07:53:15.076952 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.077012 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:15 crc kubenswrapper[4787]: E0127 07:53:15.077364 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:15 crc kubenswrapper[4787]: E0127 07:53:15.077236 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:15 crc kubenswrapper[4787]: E0127 07:53:15.077487 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.094163 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:14:52.050697245 +0000 UTC Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.118151 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fghc7" podStartSLOduration=77.118127865 podStartE2EDuration="1m17.118127865s" podCreationTimestamp="2026-01-27 07:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:15.107529942 +0000 UTC m=+100.759885434" watchObservedRunningTime="2026-01-27 07:53:15.118127865 +0000 UTC m=+100.770483357" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.138926 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=81.138889633 podStartE2EDuration="1m21.138889633s" podCreationTimestamp="2026-01-27 07:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:15.138800352 +0000 UTC m=+100.791155844" watchObservedRunningTime="2026-01-27 07:53:15.138889633 +0000 UTC m=+100.791245135" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.139349 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-rqwfc" podStartSLOduration=77.139338152 podStartE2EDuration="1m17.139338152s" podCreationTimestamp="2026-01-27 07:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:15.118525983 +0000 UTC m=+100.770881515" watchObservedRunningTime="2026-01-27 07:53:15.139338152 +0000 UTC m=+100.791693654" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.153064 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.153100 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.153110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.153124 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.153133 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:15Z","lastTransitionTime":"2026-01-27T07:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.181469 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.18142693 podStartE2EDuration="1m20.18142693s" podCreationTimestamp="2026-01-27 07:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:15.164447948 +0000 UTC m=+100.816803440" watchObservedRunningTime="2026-01-27 07:53:15.18142693 +0000 UTC m=+100.833782472" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.220305 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rqjpz" podStartSLOduration=76.220263692 podStartE2EDuration="1m16.220263692s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:15.219117739 +0000 UTC m=+100.871473241" watchObservedRunningTime="2026-01-27 07:53:15.220263692 +0000 UTC m=+100.872619224" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.247340 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-fqb6r" podStartSLOduration=76.245261526 podStartE2EDuration="1m16.245261526s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:15.242868348 +0000 UTC m=+100.895223850" watchObservedRunningTime="2026-01-27 07:53:15.245261526 +0000 UTC m=+100.897617018" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.262980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.263055 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.263069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.263094 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.263113 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:15Z","lastTransitionTime":"2026-01-27T07:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.292006 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.291979177 podStartE2EDuration="1m18.291979177s" podCreationTimestamp="2026-01-27 07:51:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:15.274023336 +0000 UTC m=+100.926378838" watchObservedRunningTime="2026-01-27 07:53:15.291979177 +0000 UTC m=+100.944334679" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.355187 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.355153649 podStartE2EDuration="15.355153649s" podCreationTimestamp="2026-01-27 07:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:15.328958332 +0000 UTC m=+100.981313824" watchObservedRunningTime="2026-01-27 07:53:15.355153649 +0000 UTC m=+101.007509141" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.365486 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.365564 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.365580 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.365609 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.365625 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:15Z","lastTransitionTime":"2026-01-27T07:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.370200 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhss5" podStartSLOduration=76.370173022 podStartE2EDuration="1m16.370173022s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:15.369036259 +0000 UTC m=+101.021391761" watchObservedRunningTime="2026-01-27 07:53:15.370173022 +0000 UTC m=+101.022528514" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.383987 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podStartSLOduration=76.38396656 podStartE2EDuration="1m16.38396656s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:15.382263426 +0000 UTC m=+101.034618938" watchObservedRunningTime="2026-01-27 07:53:15.38396656 +0000 UTC m=+101.036322052" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.397000 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.396968041 podStartE2EDuration="48.396968041s" podCreationTimestamp="2026-01-27 07:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:15.396260727 +0000 UTC m=+101.048616219" watchObservedRunningTime="2026-01-27 07:53:15.396968041 +0000 UTC m=+101.049323553" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.467463 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.467514 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.467529 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.467562 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.467576 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:15Z","lastTransitionTime":"2026-01-27T07:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.570879 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.570953 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.570967 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.570987 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.571001 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:15Z","lastTransitionTime":"2026-01-27T07:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.673494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.673613 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.673629 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.673647 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.673657 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:15Z","lastTransitionTime":"2026-01-27T07:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.775890 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.775954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.775968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.775985 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.775998 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:15Z","lastTransitionTime":"2026-01-27T07:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.878847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.878898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.878908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.878929 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.878942 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:15Z","lastTransitionTime":"2026-01-27T07:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.982125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.982177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.982187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.982205 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:15 crc kubenswrapper[4787]: I0127 07:53:15.982216 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:15Z","lastTransitionTime":"2026-01-27T07:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.085618 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.085690 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.085720 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.085782 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.085807 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:16Z","lastTransitionTime":"2026-01-27T07:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.095188 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:09:21.540851714 +0000 UTC Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.188829 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.188895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.188914 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.188943 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.188963 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:16Z","lastTransitionTime":"2026-01-27T07:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.292979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.293111 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.293136 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.293168 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.293193 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:16Z","lastTransitionTime":"2026-01-27T07:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.396353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.396394 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.396404 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.396423 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.396434 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:16Z","lastTransitionTime":"2026-01-27T07:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.500667 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.500715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.500727 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.500748 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.500760 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:16Z","lastTransitionTime":"2026-01-27T07:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.603536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.603648 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.603676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.603709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.603733 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:16Z","lastTransitionTime":"2026-01-27T07:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.707687 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.707753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.707771 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.707796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.707815 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:16Z","lastTransitionTime":"2026-01-27T07:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.811686 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.811745 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.811763 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.811791 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.811811 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:16Z","lastTransitionTime":"2026-01-27T07:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.914871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.914925 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.914937 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.914956 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:16 crc kubenswrapper[4787]: I0127 07:53:16.914969 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:16Z","lastTransitionTime":"2026-01-27T07:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.017702 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.017762 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.017773 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.017795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.017813 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:17Z","lastTransitionTime":"2026-01-27T07:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.076391 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.076570 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:17 crc kubenswrapper[4787]: E0127 07:53:17.076858 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:17 crc kubenswrapper[4787]: E0127 07:53:17.076976 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.076604 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.077086 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:17 crc kubenswrapper[4787]: E0127 07:53:17.077330 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:17 crc kubenswrapper[4787]: E0127 07:53:17.077596 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.095435 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:53:21.509390053 +0000 UTC Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.121305 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.121370 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.121397 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.121430 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.121451 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:17Z","lastTransitionTime":"2026-01-27T07:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.224789 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.224828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.224837 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.224855 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.224876 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:17Z","lastTransitionTime":"2026-01-27T07:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.328517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.328581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.328592 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.328610 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.328624 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:17Z","lastTransitionTime":"2026-01-27T07:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.433145 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.433204 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.433223 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.433251 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.433270 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:17Z","lastTransitionTime":"2026-01-27T07:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.536202 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.536243 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.536253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.536272 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.536285 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:17Z","lastTransitionTime":"2026-01-27T07:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.640986 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.641078 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.641105 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.641210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.641236 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:17Z","lastTransitionTime":"2026-01-27T07:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.684160 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:17 crc kubenswrapper[4787]: E0127 07:53:17.684485 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:53:17 crc kubenswrapper[4787]: E0127 07:53:17.684754 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs podName:3969f21f-ab36-49b4-9a9c-02cf19e65ad0 nodeName:}" failed. No retries permitted until 2026-01-27 07:54:21.68473377 +0000 UTC m=+167.337089262 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs") pod "network-metrics-daemon-vws75" (UID: "3969f21f-ab36-49b4-9a9c-02cf19e65ad0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.743823 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.743889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.743906 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.743937 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.743956 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:17Z","lastTransitionTime":"2026-01-27T07:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.847898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.847976 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.847993 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.848022 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.848039 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:17Z","lastTransitionTime":"2026-01-27T07:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.950833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.950896 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.950916 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.950942 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:17 crc kubenswrapper[4787]: I0127 07:53:17.950962 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:17Z","lastTransitionTime":"2026-01-27T07:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.054492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.054562 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.054581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.054601 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.054615 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:18Z","lastTransitionTime":"2026-01-27T07:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.096333 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:29:32.639292684 +0000 UTC Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.158060 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.158125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.158144 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.158169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.158188 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:18Z","lastTransitionTime":"2026-01-27T07:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.261061 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.261118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.261130 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.261148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.261161 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:18Z","lastTransitionTime":"2026-01-27T07:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.365186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.365265 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.365289 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.365320 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.365339 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:18Z","lastTransitionTime":"2026-01-27T07:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.468927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.469496 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.469749 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.469940 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.470084 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:18Z","lastTransitionTime":"2026-01-27T07:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.573941 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.574091 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.574109 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.574138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.574158 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:18Z","lastTransitionTime":"2026-01-27T07:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.678093 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.678175 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.678194 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.678224 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.678243 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:18Z","lastTransitionTime":"2026-01-27T07:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.781878 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.781942 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.781957 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.781981 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.781997 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:18Z","lastTransitionTime":"2026-01-27T07:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.885991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.886366 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.886490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.886664 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.886774 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:18Z","lastTransitionTime":"2026-01-27T07:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.989901 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.989955 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.989968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.989994 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:18 crc kubenswrapper[4787]: I0127 07:53:18.990008 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:18Z","lastTransitionTime":"2026-01-27T07:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.076668 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.076668 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.076707 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.076738 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:19 crc kubenswrapper[4787]: E0127 07:53:19.077940 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:19 crc kubenswrapper[4787]: E0127 07:53:19.078045 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:19 crc kubenswrapper[4787]: E0127 07:53:19.078140 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:19 crc kubenswrapper[4787]: E0127 07:53:19.078255 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.091890 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.092158 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.092235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.092319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.092397 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:19Z","lastTransitionTime":"2026-01-27T07:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.097153 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:03:49.025622369 +0000 UTC Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.195029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.195351 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.195421 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.195486 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.195593 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:19Z","lastTransitionTime":"2026-01-27T07:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.298614 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.298675 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.298692 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.298718 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.298736 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:19Z","lastTransitionTime":"2026-01-27T07:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.402382 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.402444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.402458 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.402481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.402494 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:19Z","lastTransitionTime":"2026-01-27T07:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.506354 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.506434 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.506453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.506481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.506501 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:19Z","lastTransitionTime":"2026-01-27T07:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.610456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.610522 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.610539 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.610590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.610619 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:19Z","lastTransitionTime":"2026-01-27T07:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.713966 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.714021 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.714031 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.714047 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.714059 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:19Z","lastTransitionTime":"2026-01-27T07:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.816952 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.817032 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.817051 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.817078 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.817099 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:19Z","lastTransitionTime":"2026-01-27T07:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.919590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.919645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.919656 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.919680 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:19 crc kubenswrapper[4787]: I0127 07:53:19.919694 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:19Z","lastTransitionTime":"2026-01-27T07:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.023102 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.023164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.023178 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.023200 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.023214 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:20Z","lastTransitionTime":"2026-01-27T07:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.097972 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:31:54.511500745 +0000 UTC Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.127093 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.127169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.127185 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.127210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.127231 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:20Z","lastTransitionTime":"2026-01-27T07:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.230626 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.230694 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.230713 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.230753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.230775 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:20Z","lastTransitionTime":"2026-01-27T07:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.334274 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.334370 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.334433 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.334472 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.334493 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:20Z","lastTransitionTime":"2026-01-27T07:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.437421 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.437488 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.437509 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.437535 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.437583 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:20Z","lastTransitionTime":"2026-01-27T07:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.540244 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.540283 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.540292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.540307 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.540317 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:20Z","lastTransitionTime":"2026-01-27T07:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.643812 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.643940 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.643979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.644014 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.644034 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:20Z","lastTransitionTime":"2026-01-27T07:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.747675 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.747752 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.747777 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.747809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.747829 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:20Z","lastTransitionTime":"2026-01-27T07:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.850885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.850942 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.850956 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.850976 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.850991 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:20Z","lastTransitionTime":"2026-01-27T07:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.953103 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.953139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.953150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.953163 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:20 crc kubenswrapper[4787]: I0127 07:53:20.953173 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:20Z","lastTransitionTime":"2026-01-27T07:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.056208 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.056239 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.056247 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.056260 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.056269 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:21Z","lastTransitionTime":"2026-01-27T07:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.076911 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.077018 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.077048 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:21 crc kubenswrapper[4787]: E0127 07:53:21.077069 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.077115 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:21 crc kubenswrapper[4787]: E0127 07:53:21.077150 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:21 crc kubenswrapper[4787]: E0127 07:53:21.077300 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:21 crc kubenswrapper[4787]: E0127 07:53:21.077341 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.099127 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:40:47.543571915 +0000 UTC Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.158519 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.158583 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.158594 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.158609 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.158621 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:21Z","lastTransitionTime":"2026-01-27T07:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.262118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.262183 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.262201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.262301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.262333 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:21Z","lastTransitionTime":"2026-01-27T07:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.366276 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.366367 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.366385 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.366409 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.366426 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:21Z","lastTransitionTime":"2026-01-27T07:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.469252 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.469669 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.469809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.470046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.470202 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:21Z","lastTransitionTime":"2026-01-27T07:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.573703 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.574264 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.574468 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.574723 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.574910 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:21Z","lastTransitionTime":"2026-01-27T07:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.679029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.679592 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.679860 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.680149 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.680345 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:21Z","lastTransitionTime":"2026-01-27T07:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.783845 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.783907 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.783923 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.783946 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.783962 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:21Z","lastTransitionTime":"2026-01-27T07:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.887935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.887988 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.888000 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.888024 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.888037 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:21Z","lastTransitionTime":"2026-01-27T07:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.992162 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.992217 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.992228 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.992248 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:21 crc kubenswrapper[4787]: I0127 07:53:21.992261 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:21Z","lastTransitionTime":"2026-01-27T07:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.095672 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.095738 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.095753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.095777 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.095791 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:22Z","lastTransitionTime":"2026-01-27T07:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.100893 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:12:16.778250298 +0000 UTC Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.199083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.199118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.199127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.199142 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.199153 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:22Z","lastTransitionTime":"2026-01-27T07:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.302666 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.302723 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.302736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.302761 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.302782 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:22Z","lastTransitionTime":"2026-01-27T07:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.406236 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.406300 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.406312 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.406332 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.406347 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:22Z","lastTransitionTime":"2026-01-27T07:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.510419 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.510498 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.510517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.510545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.510604 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:22Z","lastTransitionTime":"2026-01-27T07:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.614597 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.615025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.615136 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.615255 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.615347 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:22Z","lastTransitionTime":"2026-01-27T07:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.718116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.718495 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.718639 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.718748 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.718844 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:22Z","lastTransitionTime":"2026-01-27T07:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.822503 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.822894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.823076 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.823256 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.823448 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:22Z","lastTransitionTime":"2026-01-27T07:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.926979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.927055 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.927071 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.927097 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:22 crc kubenswrapper[4787]: I0127 07:53:22.927117 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:22Z","lastTransitionTime":"2026-01-27T07:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.030374 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.030449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.030475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.030511 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.030539 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:23Z","lastTransitionTime":"2026-01-27T07:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.076097 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.076180 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.076268 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:23 crc kubenswrapper[4787]: E0127 07:53:23.076348 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:23 crc kubenswrapper[4787]: E0127 07:53:23.076598 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:23 crc kubenswrapper[4787]: E0127 07:53:23.076752 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.076887 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:23 crc kubenswrapper[4787]: E0127 07:53:23.077135 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.101604 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:13:41.879854419 +0000 UTC Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.133944 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.134001 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.134015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.134035 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.134048 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:23Z","lastTransitionTime":"2026-01-27T07:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.236481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.236543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.236587 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.236609 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.236623 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:23Z","lastTransitionTime":"2026-01-27T07:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.339603 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.339650 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.339662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.339681 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.339693 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:23Z","lastTransitionTime":"2026-01-27T07:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.443355 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.443421 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.443440 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.443480 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.443497 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:23Z","lastTransitionTime":"2026-01-27T07:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.515411 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.515475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.515493 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.515522 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.515543 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:53:23Z","lastTransitionTime":"2026-01-27T07:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.572868 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54"] Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.573239 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.575617 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.576519 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.576843 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.579533 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.661076 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.661141 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.661184 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.661405 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.661749 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.763048 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.763214 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.763284 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.763337 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.763391 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.763544 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.763383 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.767949 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.776532 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.783598 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ksn54\" (UID: \"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:23 crc kubenswrapper[4787]: I0127 07:53:23.894334 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" Jan 27 07:53:24 crc kubenswrapper[4787]: I0127 07:53:24.102502 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:26:03.376828637 +0000 UTC Jan 27 07:53:24 crc kubenswrapper[4787]: I0127 07:53:24.102612 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 07:53:24 crc kubenswrapper[4787]: I0127 07:53:24.115630 4787 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 07:53:24 crc kubenswrapper[4787]: I0127 07:53:24.762355 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" event={"ID":"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c","Type":"ContainerStarted","Data":"dde55c3e68a9cf5b5b88445a04afccf06d2c0be58fb3d4a1dc8e00a181071723"} Jan 27 07:53:24 crc kubenswrapper[4787]: I0127 07:53:24.762410 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" event={"ID":"a6bbe7ce-5c2a-4d7a-84c0-80b3ae93da6c","Type":"ContainerStarted","Data":"56585961b600055370ac2c1c42d09bac6da7f2b1ae090d64881bbf99faaadff1"} Jan 27 07:53:24 crc kubenswrapper[4787]: I0127 07:53:24.781275 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ksn54" podStartSLOduration=85.781242313 podStartE2EDuration="1m25.781242313s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:24.780514468 +0000 UTC m=+110.432870040" watchObservedRunningTime="2026-01-27 07:53:24.781242313 +0000 UTC m=+110.433597865" Jan 27 07:53:25 crc kubenswrapper[4787]: I0127 07:53:25.076837 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:25 crc kubenswrapper[4787]: I0127 07:53:25.076893 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:25 crc kubenswrapper[4787]: I0127 07:53:25.077050 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:25 crc kubenswrapper[4787]: I0127 07:53:25.079842 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:25 crc kubenswrapper[4787]: E0127 07:53:25.080041 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:25 crc kubenswrapper[4787]: E0127 07:53:25.080133 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:25 crc kubenswrapper[4787]: E0127 07:53:25.080233 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:25 crc kubenswrapper[4787]: E0127 07:53:25.080388 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:26 crc kubenswrapper[4787]: I0127 07:53:26.077354 4787 scope.go:117] "RemoveContainer" containerID="6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637" Jan 27 07:53:26 crc kubenswrapper[4787]: E0127 07:53:26.077632 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7642m_openshift-ovn-kubernetes(fa44405c-042c-485a-ab6c-912dcd377751)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" Jan 27 07:53:27 crc kubenswrapper[4787]: I0127 07:53:27.076374 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:27 crc kubenswrapper[4787]: E0127 07:53:27.076633 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:27 crc kubenswrapper[4787]: I0127 07:53:27.076938 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:27 crc kubenswrapper[4787]: I0127 07:53:27.077005 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:27 crc kubenswrapper[4787]: I0127 07:53:27.076974 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:27 crc kubenswrapper[4787]: E0127 07:53:27.077138 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:27 crc kubenswrapper[4787]: E0127 07:53:27.077297 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:27 crc kubenswrapper[4787]: E0127 07:53:27.077523 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:29 crc kubenswrapper[4787]: I0127 07:53:29.075858 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:29 crc kubenswrapper[4787]: I0127 07:53:29.075908 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:29 crc kubenswrapper[4787]: I0127 07:53:29.075954 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:29 crc kubenswrapper[4787]: E0127 07:53:29.076032 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:29 crc kubenswrapper[4787]: E0127 07:53:29.076198 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:29 crc kubenswrapper[4787]: E0127 07:53:29.076257 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:29 crc kubenswrapper[4787]: I0127 07:53:29.075883 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:29 crc kubenswrapper[4787]: E0127 07:53:29.077340 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:31 crc kubenswrapper[4787]: I0127 07:53:31.076374 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:31 crc kubenswrapper[4787]: E0127 07:53:31.076519 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:31 crc kubenswrapper[4787]: I0127 07:53:31.076534 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:31 crc kubenswrapper[4787]: E0127 07:53:31.076692 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:31 crc kubenswrapper[4787]: I0127 07:53:31.076374 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:31 crc kubenswrapper[4787]: E0127 07:53:31.076783 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:31 crc kubenswrapper[4787]: I0127 07:53:31.077335 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:31 crc kubenswrapper[4787]: E0127 07:53:31.077539 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:33 crc kubenswrapper[4787]: I0127 07:53:33.075769 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:33 crc kubenswrapper[4787]: I0127 07:53:33.075940 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:33 crc kubenswrapper[4787]: I0127 07:53:33.076096 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:33 crc kubenswrapper[4787]: E0127 07:53:33.075963 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:33 crc kubenswrapper[4787]: E0127 07:53:33.076161 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:33 crc kubenswrapper[4787]: E0127 07:53:33.076295 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:33 crc kubenswrapper[4787]: I0127 07:53:33.076521 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:33 crc kubenswrapper[4787]: E0127 07:53:33.076644 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:33 crc kubenswrapper[4787]: I0127 07:53:33.794835 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rqjpz_e6f78168-0b0d-464d-b1c7-00bb9a69c0d1/kube-multus/1.log" Jan 27 07:53:33 crc kubenswrapper[4787]: I0127 07:53:33.795945 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rqjpz_e6f78168-0b0d-464d-b1c7-00bb9a69c0d1/kube-multus/0.log" Jan 27 07:53:33 crc kubenswrapper[4787]: I0127 07:53:33.796193 4787 generic.go:334] "Generic (PLEG): container finished" podID="e6f78168-0b0d-464d-b1c7-00bb9a69c0d1" containerID="4924d9f32dbc90c38c70d18db5a32bdabb76d288c34457331804829e451cf169" exitCode=1 Jan 27 07:53:33 crc kubenswrapper[4787]: I0127 07:53:33.796294 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rqjpz" event={"ID":"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1","Type":"ContainerDied","Data":"4924d9f32dbc90c38c70d18db5a32bdabb76d288c34457331804829e451cf169"} Jan 27 07:53:33 crc kubenswrapper[4787]: I0127 07:53:33.796633 4787 scope.go:117] "RemoveContainer" containerID="e73805a73ffd2c2028fa682daaf20e295ecd2f2d7e24bb9b897883f18c3c514e" Jan 27 07:53:33 crc kubenswrapper[4787]: I0127 07:53:33.797332 4787 scope.go:117] "RemoveContainer" containerID="4924d9f32dbc90c38c70d18db5a32bdabb76d288c34457331804829e451cf169" Jan 27 07:53:33 crc kubenswrapper[4787]: E0127 07:53:33.797697 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rqjpz_openshift-multus(e6f78168-0b0d-464d-b1c7-00bb9a69c0d1)\"" pod="openshift-multus/multus-rqjpz" podUID="e6f78168-0b0d-464d-b1c7-00bb9a69c0d1" Jan 27 07:53:34 crc kubenswrapper[4787]: I0127 07:53:34.801259 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rqjpz_e6f78168-0b0d-464d-b1c7-00bb9a69c0d1/kube-multus/1.log" Jan 27 07:53:35 crc kubenswrapper[4787]: E0127 07:53:35.032970 4787 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 07:53:35 crc kubenswrapper[4787]: I0127 07:53:35.076293 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:35 crc kubenswrapper[4787]: I0127 07:53:35.076367 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:35 crc kubenswrapper[4787]: I0127 07:53:35.076884 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:35 crc kubenswrapper[4787]: E0127 07:53:35.076878 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:35 crc kubenswrapper[4787]: I0127 07:53:35.076976 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:35 crc kubenswrapper[4787]: E0127 07:53:35.077154 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:35 crc kubenswrapper[4787]: E0127 07:53:35.077405 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:35 crc kubenswrapper[4787]: E0127 07:53:35.077673 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:35 crc kubenswrapper[4787]: E0127 07:53:35.201598 4787 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 07:53:37 crc kubenswrapper[4787]: I0127 07:53:37.076823 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:37 crc kubenswrapper[4787]: E0127 07:53:37.076998 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:37 crc kubenswrapper[4787]: I0127 07:53:37.077109 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:37 crc kubenswrapper[4787]: I0127 07:53:37.077198 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:37 crc kubenswrapper[4787]: E0127 07:53:37.077341 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:37 crc kubenswrapper[4787]: E0127 07:53:37.077612 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:37 crc kubenswrapper[4787]: I0127 07:53:37.077211 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:37 crc kubenswrapper[4787]: E0127 07:53:37.077892 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:37 crc kubenswrapper[4787]: I0127 07:53:37.080126 4787 scope.go:117] "RemoveContainer" containerID="6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637" Jan 27 07:53:37 crc kubenswrapper[4787]: I0127 07:53:37.814468 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/3.log" Jan 27 07:53:37 crc kubenswrapper[4787]: I0127 07:53:37.818159 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerStarted","Data":"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c"} Jan 27 07:53:37 crc kubenswrapper[4787]: I0127 07:53:37.818899 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:53:38 crc kubenswrapper[4787]: I0127 07:53:38.152718 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podStartSLOduration=99.152693543 podStartE2EDuration="1m39.152693543s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:37.847894303 +0000 UTC m=+123.500249805" watchObservedRunningTime="2026-01-27 07:53:38.152693543 +0000 UTC m=+123.805049045" Jan 27 07:53:38 crc kubenswrapper[4787]: I0127 07:53:38.153274 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vws75"] Jan 27 07:53:38 crc kubenswrapper[4787]: I0127 07:53:38.153400 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:38 crc kubenswrapper[4787]: E0127 07:53:38.153528 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:39 crc kubenswrapper[4787]: I0127 07:53:39.075959 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:39 crc kubenswrapper[4787]: E0127 07:53:39.076633 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:39 crc kubenswrapper[4787]: I0127 07:53:39.076948 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:39 crc kubenswrapper[4787]: E0127 07:53:39.077064 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:39 crc kubenswrapper[4787]: I0127 07:53:39.077270 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:39 crc kubenswrapper[4787]: E0127 07:53:39.077369 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:40 crc kubenswrapper[4787]: I0127 07:53:40.076723 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:40 crc kubenswrapper[4787]: E0127 07:53:40.078325 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:40 crc kubenswrapper[4787]: E0127 07:53:40.202624 4787 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 07:53:41 crc kubenswrapper[4787]: I0127 07:53:41.076616 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:41 crc kubenswrapper[4787]: I0127 07:53:41.076738 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:41 crc kubenswrapper[4787]: I0127 07:53:41.076977 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:41 crc kubenswrapper[4787]: E0127 07:53:41.077171 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:41 crc kubenswrapper[4787]: E0127 07:53:41.077310 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:41 crc kubenswrapper[4787]: E0127 07:53:41.077502 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:42 crc kubenswrapper[4787]: I0127 07:53:42.076061 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:42 crc kubenswrapper[4787]: E0127 07:53:42.076317 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:43 crc kubenswrapper[4787]: I0127 07:53:43.076068 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:43 crc kubenswrapper[4787]: I0127 07:53:43.076172 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:43 crc kubenswrapper[4787]: E0127 07:53:43.076243 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:43 crc kubenswrapper[4787]: I0127 07:53:43.076087 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:43 crc kubenswrapper[4787]: E0127 07:53:43.076459 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:43 crc kubenswrapper[4787]: E0127 07:53:43.076448 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:43 crc kubenswrapper[4787]: I0127 07:53:43.322118 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 07:53:44 crc kubenswrapper[4787]: I0127 07:53:44.076483 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:44 crc kubenswrapper[4787]: E0127 07:53:44.076730 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:45 crc kubenswrapper[4787]: I0127 07:53:45.075981 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:45 crc kubenswrapper[4787]: I0127 07:53:45.076072 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:45 crc kubenswrapper[4787]: I0127 07:53:45.077769 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:45 crc kubenswrapper[4787]: E0127 07:53:45.077758 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:45 crc kubenswrapper[4787]: E0127 07:53:45.077955 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:45 crc kubenswrapper[4787]: E0127 07:53:45.078145 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:45 crc kubenswrapper[4787]: E0127 07:53:45.203507 4787 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 07:53:46 crc kubenswrapper[4787]: I0127 07:53:46.075753 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:46 crc kubenswrapper[4787]: E0127 07:53:46.075964 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:46 crc kubenswrapper[4787]: I0127 07:53:46.077675 4787 scope.go:117] "RemoveContainer" containerID="4924d9f32dbc90c38c70d18db5a32bdabb76d288c34457331804829e451cf169" Jan 27 07:53:46 crc kubenswrapper[4787]: I0127 07:53:46.861137 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rqjpz_e6f78168-0b0d-464d-b1c7-00bb9a69c0d1/kube-multus/1.log" Jan 27 07:53:46 crc kubenswrapper[4787]: I0127 07:53:46.861518 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rqjpz" event={"ID":"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1","Type":"ContainerStarted","Data":"30eb607bd3c5a74648f4c24cbbf8118159296bb63fb30a76b991d8fdb94cb16a"} Jan 27 07:53:47 crc kubenswrapper[4787]: I0127 07:53:47.075761 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:47 crc kubenswrapper[4787]: I0127 07:53:47.075773 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:47 crc kubenswrapper[4787]: I0127 07:53:47.076035 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:47 crc kubenswrapper[4787]: E0127 07:53:47.075902 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:47 crc kubenswrapper[4787]: E0127 07:53:47.076178 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:47 crc kubenswrapper[4787]: E0127 07:53:47.076222 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:48 crc kubenswrapper[4787]: I0127 07:53:48.075876 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:48 crc kubenswrapper[4787]: E0127 07:53:48.076132 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:49 crc kubenswrapper[4787]: I0127 07:53:49.076603 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:49 crc kubenswrapper[4787]: I0127 07:53:49.076652 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:49 crc kubenswrapper[4787]: I0127 07:53:49.077084 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:49 crc kubenswrapper[4787]: E0127 07:53:49.077344 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:53:49 crc kubenswrapper[4787]: E0127 07:53:49.077800 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:53:49 crc kubenswrapper[4787]: E0127 07:53:49.077998 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:53:50 crc kubenswrapper[4787]: I0127 07:53:50.076211 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:50 crc kubenswrapper[4787]: E0127 07:53:50.076379 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vws75" podUID="3969f21f-ab36-49b4-9a9c-02cf19e65ad0" Jan 27 07:53:51 crc kubenswrapper[4787]: I0127 07:53:51.076640 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:53:51 crc kubenswrapper[4787]: I0127 07:53:51.076640 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:53:51 crc kubenswrapper[4787]: I0127 07:53:51.076664 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:53:51 crc kubenswrapper[4787]: I0127 07:53:51.078952 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 07:53:51 crc kubenswrapper[4787]: I0127 07:53:51.079287 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 07:53:51 crc kubenswrapper[4787]: I0127 07:53:51.079344 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 07:53:51 crc kubenswrapper[4787]: I0127 07:53:51.079383 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 07:53:52 crc kubenswrapper[4787]: I0127 07:53:52.076024 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:53:52 crc kubenswrapper[4787]: I0127 07:53:52.078807 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 07:53:52 crc kubenswrapper[4787]: I0127 07:53:52.080200 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.417607 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.473375 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.474531 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.478956 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.479126 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.481205 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zkx8b"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.482493 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.483002 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.483614 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.483941 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.484543 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.485538 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.487322 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.487911 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.488831 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.489186 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.489685 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.490027 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.491462 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.491857 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.492201 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.492500 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.492531 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.492576 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.493161 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.493180 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.493953 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.494160 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.494352 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.498429 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.499497 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.500334 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qptnb"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.500922 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.501951 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-84k56"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.502543 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.502908 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.503112 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.503391 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.504002 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.504227 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.505465 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-xwx4w"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.506910 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.507087 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.507167 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.507277 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.507372 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.507590 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.507628 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.507757 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: 
I0127 07:53:54.507848 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.508004 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.508105 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.508166 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.508259 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.508678 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.508726 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.508817 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.508837 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.508923 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.508994 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.509042 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.509401 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.509591 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.511287 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wdppr"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.511962 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ctv89"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.512195 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xwx4w" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.512396 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.512652 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.512788 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.512905 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.528082 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.532237 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.533497 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.534464 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.550848 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.551332 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g44f6"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.551704 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qtkc8"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.552175 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.552268 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.553219 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.553465 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.553471 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.554818 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.556683 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-audit-policies\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.556713 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.556747 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-audit-dir\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.556805 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-serving-cert\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.556841 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-etcd-client\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.556861 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.556899 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4wk4\" (UniqueName: \"kubernetes.io/projected/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-kube-api-access-b4wk4\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.556921 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-encryption-config\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: 
\"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.556697 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.557193 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.557280 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.557290 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.557348 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.557410 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.558887 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.559077 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.561541 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.561713 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.562034 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.565223 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.566117 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.571823 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.572372 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g44f6"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.573752 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.573947 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.576110 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.577058 4787 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.578392 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.578911 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.578945 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.579400 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.579464 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.580271 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.580289 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.581124 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.582939 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-84k56"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.587219 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.587857 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.587989 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.588191 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.588314 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.588415 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.588510 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.588634 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.588748 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.589263 4787 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.589649 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.589840 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.590043 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.592241 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zkx8b"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.594171 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.596769 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.596907 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.597084 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.616997 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.617395 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xwx4w"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.622072 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.622839 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.626130 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.626927 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.627779 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.643175 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.646297 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.647658 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mnzjr"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.648344 4787 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t6xzm"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.648853 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t6xzm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.649182 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.649326 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.650256 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.650535 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-d44bj"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.651490 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.651851 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.653445 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.653600 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.653750 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.655241 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.658860 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.661596 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ef9e285-88a8-499d-8fb8-e4c882336e68-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qtkc8\" (UID: \"3ef9e285-88a8-499d-8fb8-e4c882336e68\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.661649 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/832859dc-03b3-4b2b-9a40-75372ebb38d9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mnrsb\" (UID: \"832859dc-03b3-4b2b-9a40-75372ebb38d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.661686 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0259775f-1fef-486a-bc17-4638e38ed83f-client-ca\") pod 
\"route-controller-manager-6576b87f9c-dlz8t\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.661719 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.661746 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0259775f-1fef-486a-bc17-4638e38ed83f-config\") pod \"route-controller-manager-6576b87f9c-dlz8t\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.661769 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qpv\" (UniqueName: \"kubernetes.io/projected/a1672707-9776-4274-9bdd-4f1dabf83038-kube-api-access-s5qpv\") pod \"openshift-config-operator-7777fb866f-hn2j2\" (UID: \"a1672707-9776-4274-9bdd-4f1dabf83038\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.661803 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0a3473c0-d321-4be8-860f-bd51210cb58f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ln7rp\" (UID: \"0a3473c0-d321-4be8-860f-bd51210cb58f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.661845 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.661867 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba0745e4-7610-44e2-9a65-cd3875393d64-serving-cert\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.661889 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-audit\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.661911 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662021 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf6a704-5857-419a-bfaa-e2af3a05d386-config\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662084 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-service-ca\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662185 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfw62\" (UniqueName: \"kubernetes.io/projected/7c9507a4-925b-418e-b824-f338cd69a66e-kube-api-access-nfw62\") pod \"cluster-samples-operator-665b6dd947-q44w4\" (UID: \"7c9507a4-925b-418e-b824-f338cd69a66e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662278 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-audit-dir\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662320 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmsq\" (UniqueName: \"kubernetes.io/projected/ca4c081d-896c-4cc8-9656-59364376de35-kube-api-access-kxmsq\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662319 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662396 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662423 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba0745e4-7610-44e2-9a65-cd3875393d64-encryption-config\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662451 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662481 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-trusted-ca-bundle\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662541 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-client-ca\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662596 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ba0745e4-7610-44e2-9a65-cd3875393d64-node-pullsecrets\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662621 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662680 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stktz\" (UniqueName: \"kubernetes.io/projected/0259775f-1fef-486a-bc17-4638e38ed83f-kube-api-access-stktz\") pod \"route-controller-manager-6576b87f9c-dlz8t\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662834 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662906 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-audit-dir\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662935 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af87df13-9164-4e5e-99d4-5cbd9dfe80a5-serving-cert\") pod 
\"console-operator-58897d9998-ctv89\" (UID: \"af87df13-9164-4e5e-99d4-5cbd9dfe80a5\") " pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.662961 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7r6d\" (UniqueName: \"kubernetes.io/projected/ba0745e4-7610-44e2-9a65-cd3875393d64-kube-api-access-m7r6d\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663006 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663005 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3ef9e285-88a8-499d-8fb8-e4c882336e68-images\") pod \"machine-api-operator-5694c8668f-qtkc8\" (UID: \"3ef9e285-88a8-499d-8fb8-e4c882336e68\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663295 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-serving-cert\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663386 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-config\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663413 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af87df13-9164-4e5e-99d4-5cbd9dfe80a5-config\") pod \"console-operator-58897d9998-ctv89\" (UID: \"af87df13-9164-4e5e-99d4-5cbd9dfe80a5\") " pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663436 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c54z8\" (UniqueName: \"kubernetes.io/projected/fcc64739-2fd2-4413-a19e-0bc14dd883d6-kube-api-access-c54z8\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663459 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf6a704-5857-419a-bfaa-e2af3a05d386-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663509 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dfjfh\" (UniqueName: \"kubernetes.io/projected/832859dc-03b3-4b2b-9a40-75372ebb38d9-kube-api-access-dfjfh\") pod \"openshift-apiserver-operator-796bbdcf4f-mnrsb\" (UID: \"832859dc-03b3-4b2b-9a40-75372ebb38d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663541 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663584 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp227\" (UniqueName: \"kubernetes.io/projected/0a3473c0-d321-4be8-860f-bd51210cb58f-kube-api-access-tp227\") pod \"cluster-image-registry-operator-dc59b4c8b-ln7rp\" (UID: \"0a3473c0-d321-4be8-860f-bd51210cb58f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663607 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663624 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca4c081d-896c-4cc8-9656-59364376de35-serving-cert\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663641 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-config\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663657 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663679 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663708 4787 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-b4wk4\" (UniqueName: \"kubernetes.io/projected/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-kube-api-access-b4wk4\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663730 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/287286a8-80b2-4c95-948d-a096153d8e51-machine-approver-tls\") pod \"machine-approver-56656f9798-tfh8x\" (UID: \"287286a8-80b2-4c95-948d-a096153d8e51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663753 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-encryption-config\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663771 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-etcd-serving-ca\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663791 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcc64739-2fd2-4413-a19e-0bc14dd883d6-audit-dir\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663809 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjkkn\" (UniqueName: \"kubernetes.io/projected/daf6a704-5857-419a-bfaa-e2af3a05d386-kube-api-access-mjkkn\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663851 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-oauth-config\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663873 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-oauth-serving-cert\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663899 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k9vf\" (UniqueName: 
\"kubernetes.io/projected/af87df13-9164-4e5e-99d4-5cbd9dfe80a5-kube-api-access-5k9vf\") pod \"console-operator-58897d9998-ctv89\" (UID: \"af87df13-9164-4e5e-99d4-5cbd9dfe80a5\") " pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663916 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a3473c0-d321-4be8-860f-bd51210cb58f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ln7rp\" (UID: \"0a3473c0-d321-4be8-860f-bd51210cb58f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.663935 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvmrj\" (UniqueName: \"kubernetes.io/projected/287286a8-80b2-4c95-948d-a096153d8e51-kube-api-access-mvmrj\") pod \"machine-approver-56656f9798-tfh8x\" (UID: \"287286a8-80b2-4c95-948d-a096153d8e51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664171 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-audit-policies\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664276 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664296 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf6a704-5857-419a-bfaa-e2af3a05d386-serving-cert\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664316 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-serving-cert\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664339 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664357 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v9n9\" 
(UniqueName: \"kubernetes.io/projected/301e3f1a-19c5-47a7-b85d-81676098f971-kube-api-access-5v9n9\") pod \"downloads-7954f5f757-xwx4w\" (UID: \"301e3f1a-19c5-47a7-b85d-81676098f971\") " pod="openshift-console/downloads-7954f5f757-xwx4w" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664373 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf6a704-5857-419a-bfaa-e2af3a05d386-service-ca-bundle\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664422 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/832859dc-03b3-4b2b-9a40-75372ebb38d9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mnrsb\" (UID: \"832859dc-03b3-4b2b-9a40-75372ebb38d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664447 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a1672707-9776-4274-9bdd-4f1dabf83038-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hn2j2\" (UID: \"a1672707-9776-4274-9bdd-4f1dabf83038\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664473 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664491 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0259775f-1fef-486a-bc17-4638e38ed83f-serving-cert\") pod \"route-controller-manager-6576b87f9c-dlz8t\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664533 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.664592 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba0745e4-7610-44e2-9a65-cd3875393d64-etcd-client\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665075 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1672707-9776-4274-9bdd-4f1dabf83038-serving-cert\") pod \"openshift-config-operator-7777fb866f-hn2j2\" (UID: \"a1672707-9776-4274-9bdd-4f1dabf83038\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665131 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af87df13-9164-4e5e-99d4-5cbd9dfe80a5-trusted-ca\") pod \"console-operator-58897d9998-ctv89\" (UID: \"af87df13-9164-4e5e-99d4-5cbd9dfe80a5\") " pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665151 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cbr9\" (UniqueName: \"kubernetes.io/projected/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-kube-api-access-9cbr9\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665200 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665216 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665235 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-config\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665255 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef9e285-88a8-499d-8fb8-e4c882336e68-config\") pod \"machine-api-operator-5694c8668f-qtkc8\" (UID: \"3ef9e285-88a8-499d-8fb8-e4c882336e68\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665273 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/287286a8-80b2-4c95-948d-a096153d8e51-auth-proxy-config\") pod \"machine-approver-56656f9798-tfh8x\" (UID: \"287286a8-80b2-4c95-948d-a096153d8e51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665295 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-image-import-ca\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: 
I0127 07:53:54.665314 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-audit-policies\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665331 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/287286a8-80b2-4c95-948d-a096153d8e51-config\") pod \"machine-approver-56656f9798-tfh8x\" (UID: \"287286a8-80b2-4c95-948d-a096153d8e51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665356 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c9507a4-925b-418e-b824-f338cd69a66e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-q44w4\" (UID: \"7c9507a4-925b-418e-b824-f338cd69a66e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665379 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgzr\" (UniqueName: \"kubernetes.io/projected/3ef9e285-88a8-499d-8fb8-e4c882336e68-kube-api-access-7kgzr\") pod \"machine-api-operator-5694c8668f-qtkc8\" (UID: \"3ef9e285-88a8-499d-8fb8-e4c882336e68\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665400 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-etcd-client\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665416 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba0745e4-7610-44e2-9a65-cd3875393d64-audit-dir\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665433 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a3473c0-d321-4be8-860f-bd51210cb58f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ln7rp\" (UID: \"0a3473c0-d321-4be8-860f-bd51210cb58f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665597 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.665958 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.666197 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-audit-policies\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.666324 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.666344 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.666475 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.666570 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.668480 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.669019 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.671743 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.672292 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.673835 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qptnb"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.675936 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-encryption-config\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.676080 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.676376 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-etcd-client\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.676812 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.677661 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.678613 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.678963 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.679420 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.680364 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.683179 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.684079 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.686167 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-serving-cert\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.686559 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wdppr"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.688169 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ctv89"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.689759 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.690780 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.691104 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.691263 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.691397 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.692174 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.693642 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.698029 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h4cr4"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.698861 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.700154 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4cr4" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.700419 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h5d4g"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.701090 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.701398 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9xlcr"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.702228 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.705758 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.708689 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9xlcr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.713361 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qtkc8"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.713529 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jzdds"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.718869 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.737141 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mnzjr"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.737214 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tx6fd"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.738326 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t6xzm"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.738350 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-crvsg"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.738940 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.739226 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.740253 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.740332 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tx6fd" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.741913 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.742814 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.753099 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-d44bj"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.753170 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.753756 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.755432 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.756793 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.757143 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rkzlx"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.758116 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rkzlx" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.758499 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.759623 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.761216 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.762329 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.763614 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.764743 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h4cr4"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.766174 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/287286a8-80b2-4c95-948d-a096153d8e51-config\") pod \"machine-approver-56656f9798-tfh8x\" (UID: \"287286a8-80b2-4c95-948d-a096153d8e51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.766286 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c9507a4-925b-418e-b824-f338cd69a66e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-q44w4\" (UID: \"7c9507a4-925b-418e-b824-f338cd69a66e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.766363 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-image-import-ca\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.766453 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-audit-policies\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.766573 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgzr\" (UniqueName: \"kubernetes.io/projected/3ef9e285-88a8-499d-8fb8-e4c882336e68-kube-api-access-7kgzr\") pod \"machine-api-operator-5694c8668f-qtkc8\" (UID: \"3ef9e285-88a8-499d-8fb8-e4c882336e68\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.766701 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f063d15e-fe05-46d3-8304-72da67594ea6-etcd-client\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.766791 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba0745e4-7610-44e2-9a65-cd3875393d64-audit-dir\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.766889 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a3473c0-d321-4be8-860f-bd51210cb58f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ln7rp\" (UID: \"0a3473c0-d321-4be8-860f-bd51210cb58f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.766982 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f063d15e-fe05-46d3-8304-72da67594ea6-serving-cert\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.767091 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5-metrics-tls\") pod \"ingress-operator-5b745b69d9-cqdk9\" (UID: \"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.767193 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ef9e285-88a8-499d-8fb8-e4c882336e68-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qtkc8\" (UID: 
\"3ef9e285-88a8-499d-8fb8-e4c882336e68\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.767276 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l85r5\" (UniqueName: \"kubernetes.io/projected/c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3-kube-api-access-l85r5\") pod \"machine-config-controller-84d6567774-vl4tj\" (UID: \"c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.767360 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/832859dc-03b3-4b2b-9a40-75372ebb38d9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mnrsb\" (UID: \"832859dc-03b3-4b2b-9a40-75372ebb38d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.767430 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0259775f-1fef-486a-bc17-4638e38ed83f-client-ca\") pod \"route-controller-manager-6576b87f9c-dlz8t\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.767518 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f063d15e-fe05-46d3-8304-72da67594ea6-etcd-ca\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.767627 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c622447-70d1-4b27-b09c-6fa6402f632c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4xj5z\" (UID: \"4c622447-70d1-4b27-b09c-6fa6402f632c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.767744 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5qpv\" (UniqueName: \"kubernetes.io/projected/a1672707-9776-4274-9bdd-4f1dabf83038-kube-api-access-s5qpv\") pod \"openshift-config-operator-7777fb866f-hn2j2\" (UID: \"a1672707-9776-4274-9bdd-4f1dabf83038\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.767844 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5-trusted-ca\") pod \"ingress-operator-5b745b69d9-cqdk9\" (UID: \"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.767956 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wdppr\" 
(UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.768056 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0259775f-1fef-486a-bc17-4638e38ed83f-config\") pod \"route-controller-manager-6576b87f9c-dlz8t\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.768164 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0a3473c0-d321-4be8-860f-bd51210cb58f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ln7rp\" (UID: \"0a3473c0-d321-4be8-860f-bd51210cb58f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.768262 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3-proxy-tls\") pod \"machine-config-controller-84d6567774-vl4tj\" (UID: \"c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.768361 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba0745e4-7610-44e2-9a65-cd3875393d64-serving-cert\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.768470 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.768695 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf6a704-5857-419a-bfaa-e2af3a05d386-config\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770171 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-service-ca\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770250 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba0745e4-7610-44e2-9a65-cd3875393d64-audit-dir\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770263 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-audit\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770348 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfw62\" (UniqueName: \"kubernetes.io/projected/7c9507a4-925b-418e-b824-f338cd69a66e-kube-api-access-nfw62\") pod \"cluster-samples-operator-665b6dd947-q44w4\" (UID: \"7c9507a4-925b-418e-b824-f338cd69a66e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770388 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba0745e4-7610-44e2-9a65-cd3875393d64-encryption-config\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770421 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770451 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-trusted-ca-bundle\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770483 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vl4tj\" (UID: \"c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770520 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxmsq\" (UniqueName: \"kubernetes.io/projected/ca4c081d-896c-4cc8-9656-59364376de35-kube-api-access-kxmsq\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770583 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770613 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-client-ca\") pod 
\"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770660 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ba0745e4-7610-44e2-9a65-cd3875393d64-node-pullsecrets\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770693 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770720 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stktz\" (UniqueName: \"kubernetes.io/projected/0259775f-1fef-486a-bc17-4638e38ed83f-kube-api-access-stktz\") pod \"route-controller-manager-6576b87f9c-dlz8t\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770771 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7r6d\" (UniqueName: \"kubernetes.io/projected/ba0745e4-7610-44e2-9a65-cd3875393d64-kube-api-access-m7r6d\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770799 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3ef9e285-88a8-499d-8fb8-e4c882336e68-images\") pod \"machine-api-operator-5694c8668f-qtkc8\" (UID: \"3ef9e285-88a8-499d-8fb8-e4c882336e68\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770843 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af87df13-9164-4e5e-99d4-5cbd9dfe80a5-serving-cert\") pod \"console-operator-58897d9998-ctv89\" (UID: \"af87df13-9164-4e5e-99d4-5cbd9dfe80a5\") " pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770876 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-config\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770910 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvsj6\" (UniqueName: \"kubernetes.io/projected/c71976d9-f7ec-4258-8bf5-08a526607fd9-kube-api-access-vvsj6\") pod \"control-plane-machine-set-operator-78cbb6b69f-wdvfw\" (UID: \"c71976d9-f7ec-4258-8bf5-08a526607fd9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw" Jan 27 
07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770938 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c54z8\" (UniqueName: \"kubernetes.io/projected/fcc64739-2fd2-4413-a19e-0bc14dd883d6-kube-api-access-c54z8\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.770966 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf6a704-5857-419a-bfaa-e2af3a05d386-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771046 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfjfh\" (UniqueName: \"kubernetes.io/projected/832859dc-03b3-4b2b-9a40-75372ebb38d9-kube-api-access-dfjfh\") pod \"openshift-apiserver-operator-796bbdcf4f-mnrsb\" (UID: \"832859dc-03b3-4b2b-9a40-75372ebb38d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771080 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b9018b2-8e63-4c80-8c71-cc9fa7ddb853-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgw8m\" (UID: \"3b9018b2-8e63-4c80-8c71-cc9fa7ddb853\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771118 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af87df13-9164-4e5e-99d4-5cbd9dfe80a5-config\") pod \"console-operator-58897d9998-ctv89\" (UID: \"af87df13-9164-4e5e-99d4-5cbd9dfe80a5\") " pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771151 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771184 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp227\" (UniqueName: \"kubernetes.io/projected/0a3473c0-d321-4be8-860f-bd51210cb58f-kube-api-access-tp227\") pod \"cluster-image-registry-operator-dc59b4c8b-ln7rp\" (UID: \"0a3473c0-d321-4be8-860f-bd51210cb58f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771217 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca4c081d-896c-4cc8-9656-59364376de35-serving-cert\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 
07:53:54.771246 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-config\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771273 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771310 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771340 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/287286a8-80b2-4c95-948d-a096153d8e51-machine-approver-tls\") pod \"machine-approver-56656f9798-tfh8x\" (UID: \"287286a8-80b2-4c95-948d-a096153d8e51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771382 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-etcd-serving-ca\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771414 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcc64739-2fd2-4413-a19e-0bc14dd883d6-audit-dir\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771442 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjkkn\" (UniqueName: \"kubernetes.io/projected/daf6a704-5857-419a-bfaa-e2af3a05d386-kube-api-access-mjkkn\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771470 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-oauth-config\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771507 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-oauth-serving-cert\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771539 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f063d15e-fe05-46d3-8304-72da67594ea6-etcd-service-ca\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771587 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c71976d9-f7ec-4258-8bf5-08a526607fd9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wdvfw\" (UID: \"c71976d9-f7ec-4258-8bf5-08a526607fd9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771620 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k9vf\" (UniqueName: \"kubernetes.io/projected/af87df13-9164-4e5e-99d4-5cbd9dfe80a5-kube-api-access-5k9vf\") pod \"console-operator-58897d9998-ctv89\" (UID: \"af87df13-9164-4e5e-99d4-5cbd9dfe80a5\") " pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771650 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a3473c0-d321-4be8-860f-bd51210cb58f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ln7rp\" (UID: \"0a3473c0-d321-4be8-860f-bd51210cb58f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771679 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvmrj\" (UniqueName: \"kubernetes.io/projected/287286a8-80b2-4c95-948d-a096153d8e51-kube-api-access-mvmrj\") pod \"machine-approver-56656f9798-tfh8x\" (UID: \"287286a8-80b2-4c95-948d-a096153d8e51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771726 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9rhv\" (UniqueName: \"kubernetes.io/projected/3b9018b2-8e63-4c80-8c71-cc9fa7ddb853-kube-api-access-t9rhv\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgw8m\" (UID: \"3b9018b2-8e63-4c80-8c71-cc9fa7ddb853\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771759 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771788 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf6a704-5857-419a-bfaa-e2af3a05d386-serving-cert\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771833 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-serving-cert\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771859 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf6a704-5857-419a-bfaa-e2af3a05d386-service-ca-bundle\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771886 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/832859dc-03b3-4b2b-9a40-75372ebb38d9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mnrsb\" (UID: \"832859dc-03b3-4b2b-9a40-75372ebb38d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771914 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a1672707-9776-4274-9bdd-4f1dabf83038-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hn2j2\" (UID: \"a1672707-9776-4274-9bdd-4f1dabf83038\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771942 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrtv2\" (UniqueName: \"kubernetes.io/projected/52652793-dcbd-4568-a32e-b727818c61a8-kube-api-access-xrtv2\") pod \"kube-storage-version-migrator-operator-b67b599dd-d9n62\" (UID: \"52652793-dcbd-4568-a32e-b727818c61a8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.771984 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772012 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v9n9\" (UniqueName: \"kubernetes.io/projected/301e3f1a-19c5-47a7-b85d-81676098f971-kube-api-access-5v9n9\") pod \"downloads-7954f5f757-xwx4w\" (UID: \"301e3f1a-19c5-47a7-b85d-81676098f971\") " pod="openshift-console/downloads-7954f5f757-xwx4w" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772041 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772070 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0259775f-1fef-486a-bc17-4638e38ed83f-serving-cert\") pod \"route-controller-manager-6576b87f9c-dlz8t\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772113 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qvpd\" (UniqueName: \"kubernetes.io/projected/4c622447-70d1-4b27-b09c-6fa6402f632c-kube-api-access-8qvpd\") pod \"package-server-manager-789f6589d5-4xj5z\" (UID: \"4c622447-70d1-4b27-b09c-6fa6402f632c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772121 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.766797 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/287286a8-80b2-4c95-948d-a096153d8e51-config\") pod \"machine-approver-56656f9798-tfh8x\" (UID: \"287286a8-80b2-4c95-948d-a096153d8e51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772193 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772211 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772141 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52652793-dcbd-4568-a32e-b727818c61a8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d9n62\" (UID: \"52652793-dcbd-4568-a32e-b727818c61a8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772297 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba0745e4-7610-44e2-9a65-cd3875393d64-etcd-client\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772334 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52652793-dcbd-4568-a32e-b727818c61a8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d9n62\" (UID: \"52652793-dcbd-4568-a32e-b727818c61a8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" 
Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772367 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772390 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f063d15e-fe05-46d3-8304-72da67594ea6-config\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772414 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49hhm\" (UniqueName: \"kubernetes.io/projected/2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5-kube-api-access-49hhm\") pod \"ingress-operator-5b745b69d9-cqdk9\" (UID: \"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772441 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1672707-9776-4274-9bdd-4f1dabf83038-serving-cert\") pod \"openshift-config-operator-7777fb866f-hn2j2\" (UID: \"a1672707-9776-4274-9bdd-4f1dabf83038\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772463 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af87df13-9164-4e5e-99d4-5cbd9dfe80a5-trusted-ca\") pod \"console-operator-58897d9998-ctv89\" (UID: \"af87df13-9164-4e5e-99d4-5cbd9dfe80a5\") " pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772486 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cbr9\" (UniqueName: \"kubernetes.io/projected/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-kube-api-access-9cbr9\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772505 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cqdk9\" (UID: \"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772530 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b9018b2-8e63-4c80-8c71-cc9fa7ddb853-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgw8m\" (UID: \"3b9018b2-8e63-4c80-8c71-cc9fa7ddb853\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772608 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772630 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgqps\" (UniqueName: \"kubernetes.io/projected/f063d15e-fe05-46d3-8304-72da67594ea6-kube-api-access-dgqps\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772656 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-config\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772676 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef9e285-88a8-499d-8fb8-e4c882336e68-config\") pod \"machine-api-operator-5694c8668f-qtkc8\" (UID: \"3ef9e285-88a8-499d-8fb8-e4c882336e68\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.772697 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/287286a8-80b2-4c95-948d-a096153d8e51-auth-proxy-config\") pod \"machine-approver-56656f9798-tfh8x\" (UID: \"287286a8-80b2-4c95-948d-a096153d8e51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.773628 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/287286a8-80b2-4c95-948d-a096153d8e51-auth-proxy-config\") pod \"machine-approver-56656f9798-tfh8x\" (UID: \"287286a8-80b2-4c95-948d-a096153d8e51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.773631 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-audit\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.769749 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-image-import-ca\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.767323 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-audit-policies\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.775804 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ef9e285-88a8-499d-8fb8-e4c882336e68-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qtkc8\" (UID: \"3ef9e285-88a8-499d-8fb8-e4c882336e68\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.776044 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-service-ca\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.769978 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf6a704-5857-419a-bfaa-e2af3a05d386-config\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.776445 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af87df13-9164-4e5e-99d4-5cbd9dfe80a5-trusted-ca\") pod \"console-operator-58897d9998-ctv89\" (UID: \"af87df13-9164-4e5e-99d4-5cbd9dfe80a5\") " pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.776539 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af87df13-9164-4e5e-99d4-5cbd9dfe80a5-config\") pod \"console-operator-58897d9998-ctv89\" (UID: \"af87df13-9164-4e5e-99d4-5cbd9dfe80a5\") " pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.780224 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-config\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.780455 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c9507a4-925b-418e-b824-f338cd69a66e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-q44w4\" (UID: \"7c9507a4-925b-418e-b824-f338cd69a66e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.780727 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef9e285-88a8-499d-8fb8-e4c882336e68-config\") pod \"machine-api-operator-5694c8668f-qtkc8\" (UID: \"3ef9e285-88a8-499d-8fb8-e4c882336e68\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.781002 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.781094 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.781335 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1672707-9776-4274-9bdd-4f1dabf83038-serving-cert\") pod \"openshift-config-operator-7777fb866f-hn2j2\" (UID: \"a1672707-9776-4274-9bdd-4f1dabf83038\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.781388 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.781810 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf6a704-5857-419a-bfaa-e2af3a05d386-service-ca-bundle\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.781868 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/832859dc-03b3-4b2b-9a40-75372ebb38d9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mnrsb\" (UID: \"832859dc-03b3-4b2b-9a40-75372ebb38d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.781931 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a1672707-9776-4274-9bdd-4f1dabf83038-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hn2j2\" (UID: \"a1672707-9776-4274-9bdd-4f1dabf83038\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.782075 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.782541 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.782834 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-trusted-ca-bundle\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.783378 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a3473c0-d321-4be8-860f-bd51210cb58f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ln7rp\" (UID: \"0a3473c0-d321-4be8-860f-bd51210cb58f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.783901 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.784190 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.784387 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-config\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.784391 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0259775f-1fef-486a-bc17-4638e38ed83f-client-ca\") pod \"route-controller-manager-6576b87f9c-dlz8t\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.785015 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0259775f-1fef-486a-bc17-4638e38ed83f-config\") pod \"route-controller-manager-6576b87f9c-dlz8t\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.785749 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-etcd-serving-ca\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.785752 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0259775f-1fef-486a-bc17-4638e38ed83f-serving-cert\") pod \"route-controller-manager-6576b87f9c-dlz8t\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.785820 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-config\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.786097 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcc64739-2fd2-4413-a19e-0bc14dd883d6-audit-dir\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.786228 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/287286a8-80b2-4c95-948d-a096153d8e51-machine-approver-tls\") pod \"machine-approver-56656f9798-tfh8x\" (UID: \"287286a8-80b2-4c95-948d-a096153d8e51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.787041 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.787220 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba0745e4-7610-44e2-9a65-cd3875393d64-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.787568 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca4c081d-896c-4cc8-9656-59364376de35-serving-cert\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.787721 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.788204 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-oauth-serving-cert\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.788630 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-client-ca\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.788683 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-crvsg"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.788718 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h5d4g"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.789112 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.789202 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ba0745e4-7610-44e2-9a65-cd3875393d64-node-pullsecrets\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.789171 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.789128 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba0745e4-7610-44e2-9a65-cd3875393d64-etcd-client\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.790016 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf6a704-5857-419a-bfaa-e2af3a05d386-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.790078 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.790029 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3ef9e285-88a8-499d-8fb8-e4c882336e68-images\") pod \"machine-api-operator-5694c8668f-qtkc8\" (UID: \"3ef9e285-88a8-499d-8fb8-e4c882336e68\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.790278 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/832859dc-03b3-4b2b-9a40-75372ebb38d9-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-mnrsb\" (UID: \"832859dc-03b3-4b2b-9a40-75372ebb38d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.790323 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.790612 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba0745e4-7610-44e2-9a65-cd3875393d64-encryption-config\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.791022 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.791145 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af87df13-9164-4e5e-99d4-5cbd9dfe80a5-serving-cert\") pod \"console-operator-58897d9998-ctv89\" (UID: \"af87df13-9164-4e5e-99d4-5cbd9dfe80a5\") " pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.791792 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-oauth-config\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.792852 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.792940 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba0745e4-7610-44e2-9a65-cd3875393d64-serving-cert\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.793357 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-serving-cert\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.794615 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.795532 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/daf6a704-5857-419a-bfaa-e2af3a05d386-serving-cert\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.796320 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0a3473c0-d321-4be8-860f-bd51210cb58f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ln7rp\" (UID: \"0a3473c0-d321-4be8-860f-bd51210cb58f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.796541 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.804625 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rkzlx"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.804722 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tx6fd"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.806662 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-znxn2"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.807571 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mwnxd"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.807784 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-znxn2" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.808907 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-znxn2"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.809029 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.812269 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mwnxd"] Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.817540 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.836932 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.873582 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f063d15e-fe05-46d3-8304-72da67594ea6-etcd-client\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.873677 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f063d15e-fe05-46d3-8304-72da67594ea6-serving-cert\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874018 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5-metrics-tls\") pod \"ingress-operator-5b745b69d9-cqdk9\" (UID: \"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874109 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l85r5\" (UniqueName: \"kubernetes.io/projected/c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3-kube-api-access-l85r5\") pod \"machine-config-controller-84d6567774-vl4tj\" (UID: \"c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874178 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f063d15e-fe05-46d3-8304-72da67594ea6-etcd-ca\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874209 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c622447-70d1-4b27-b09c-6fa6402f632c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4xj5z\" (UID: \"4c622447-70d1-4b27-b09c-6fa6402f632c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874291 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5-trusted-ca\") pod \"ingress-operator-5b745b69d9-cqdk9\" (UID: \"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 
07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874351 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3-proxy-tls\") pod \"machine-config-controller-84d6567774-vl4tj\" (UID: \"c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874440 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vl4tj\" (UID: \"c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874539 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvsj6\" (UniqueName: \"kubernetes.io/projected/c71976d9-f7ec-4258-8bf5-08a526607fd9-kube-api-access-vvsj6\") pod \"control-plane-machine-set-operator-78cbb6b69f-wdvfw\" (UID: \"c71976d9-f7ec-4258-8bf5-08a526607fd9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874606 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b9018b2-8e63-4c80-8c71-cc9fa7ddb853-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgw8m\" (UID: \"3b9018b2-8e63-4c80-8c71-cc9fa7ddb853\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874707 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f063d15e-fe05-46d3-8304-72da67594ea6-etcd-service-ca\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874737 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c71976d9-f7ec-4258-8bf5-08a526607fd9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wdvfw\" (UID: \"c71976d9-f7ec-4258-8bf5-08a526607fd9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874792 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9rhv\" (UniqueName: \"kubernetes.io/projected/3b9018b2-8e63-4c80-8c71-cc9fa7ddb853-kube-api-access-t9rhv\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgw8m\" (UID: \"3b9018b2-8e63-4c80-8c71-cc9fa7ddb853\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874834 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrtv2\" (UniqueName: \"kubernetes.io/projected/52652793-dcbd-4568-a32e-b727818c61a8-kube-api-access-xrtv2\") pod \"kube-storage-version-migrator-operator-b67b599dd-d9n62\" (UID: 
\"52652793-dcbd-4568-a32e-b727818c61a8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874867 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qvpd\" (UniqueName: \"kubernetes.io/projected/4c622447-70d1-4b27-b09c-6fa6402f632c-kube-api-access-8qvpd\") pod \"package-server-manager-789f6589d5-4xj5z\" (UID: \"4c622447-70d1-4b27-b09c-6fa6402f632c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874890 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52652793-dcbd-4568-a32e-b727818c61a8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d9n62\" (UID: \"52652793-dcbd-4568-a32e-b727818c61a8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874918 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52652793-dcbd-4568-a32e-b727818c61a8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d9n62\" (UID: \"52652793-dcbd-4568-a32e-b727818c61a8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874948 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f063d15e-fe05-46d3-8304-72da67594ea6-config\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.874979 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49hhm\" (UniqueName: \"kubernetes.io/projected/2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5-kube-api-access-49hhm\") pod \"ingress-operator-5b745b69d9-cqdk9\" (UID: \"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.875016 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cqdk9\" (UID: \"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.875044 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b9018b2-8e63-4c80-8c71-cc9fa7ddb853-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgw8m\" (UID: \"3b9018b2-8e63-4c80-8c71-cc9fa7ddb853\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.875081 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgqps\" (UniqueName: \"kubernetes.io/projected/f063d15e-fe05-46d3-8304-72da67594ea6-kube-api-access-dgqps\") pod \"etcd-operator-b45778765-mnzjr\" (UID: 
\"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.875440 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f063d15e-fe05-46d3-8304-72da67594ea6-etcd-ca\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.876080 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f063d15e-fe05-46d3-8304-72da67594ea6-etcd-service-ca\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.876832 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f063d15e-fe05-46d3-8304-72da67594ea6-config\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.877154 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.877197 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vl4tj\" (UID: \"c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.878778 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f063d15e-fe05-46d3-8304-72da67594ea6-serving-cert\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.879596 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f063d15e-fe05-46d3-8304-72da67594ea6-etcd-client\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.898264 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.916964 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.928383 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b9018b2-8e63-4c80-8c71-cc9fa7ddb853-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgw8m\" (UID: \"3b9018b2-8e63-4c80-8c71-cc9fa7ddb853\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.937257 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.947622 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b9018b2-8e63-4c80-8c71-cc9fa7ddb853-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgw8m\" (UID: \"3b9018b2-8e63-4c80-8c71-cc9fa7ddb853\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.958896 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.993762 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4wk4\" (UniqueName: \"kubernetes.io/projected/ccb9403f-6f1d-4bd9-a18b-b90404cff1a0-kube-api-access-b4wk4\") pod \"apiserver-7bbb656c7d-p29lm\" (UID: \"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:54 crc kubenswrapper[4787]: I0127 07:53:54.997952 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.017140 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.037097 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.057992 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.076954 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.097622 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.113014 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.121922 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.138139 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.157794 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.176856 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.196827 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.235058 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.247536 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.257843 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5-trusted-ca\") pod \"ingress-operator-5b745b69d9-cqdk9\" (UID: \"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.260305 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.277064 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.295462 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5-metrics-tls\") pod \"ingress-operator-5b745b69d9-cqdk9\" (UID: \"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.304636 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.317409 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.330736 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3-proxy-tls\") pod \"machine-config-controller-84d6567774-vl4tj\" (UID: \"c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.339535 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.351241 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm"] Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.357918 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 07:53:55 crc kubenswrapper[4787]: W0127 07:53:55.359887 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccb9403f_6f1d_4bd9_a18b_b90404cff1a0.slice/crio-f13f654e3279ce9d1ef580ebd7cf1cd02ccc09d334fb2570726c7eaafa2c7dda WatchSource:0}: Error finding container f13f654e3279ce9d1ef580ebd7cf1cd02ccc09d334fb2570726c7eaafa2c7dda: Status 404 returned error can't find the container with id f13f654e3279ce9d1ef580ebd7cf1cd02ccc09d334fb2570726c7eaafa2c7dda Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.376673 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.398494 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.418316 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.436403 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.457439 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.478034 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.497106 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.518040 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.538329 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.557997 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.569673 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c71976d9-f7ec-4258-8bf5-08a526607fd9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wdvfw\" (UID: \"c71976d9-f7ec-4258-8bf5-08a526607fd9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.577126 4787 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.598041 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.608370 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c622447-70d1-4b27-b09c-6fa6402f632c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4xj5z\" (UID: \"4c622447-70d1-4b27-b09c-6fa6402f632c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.625614 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.636928 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.657058 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.677394 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.695108 4787 request.go:700] Waited for 1.000917057s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0 Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.697664 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.710054 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52652793-dcbd-4568-a32e-b727818c61a8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d9n62\" (UID: \"52652793-dcbd-4568-a32e-b727818c61a8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.717520 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.727085 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52652793-dcbd-4568-a32e-b727818c61a8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d9n62\" (UID: \"52652793-dcbd-4568-a32e-b727818c61a8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.738021 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 07:53:55 crc 
kubenswrapper[4787]: I0127 07:53:55.757712 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.777040 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.796504 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.817682 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.837530 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.857032 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.876282 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.902367 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.903007 4787 generic.go:334] "Generic (PLEG): container finished" podID="ccb9403f-6f1d-4bd9-a18b-b90404cff1a0" containerID="4db902cdd724c33c548d97b5991fae101f4cbc5f095950a14187e89bb41c4acc" exitCode=0 Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.903078 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" event={"ID":"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0","Type":"ContainerDied","Data":"4db902cdd724c33c548d97b5991fae101f4cbc5f095950a14187e89bb41c4acc"} Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.903133 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" event={"ID":"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0","Type":"ContainerStarted","Data":"f13f654e3279ce9d1ef580ebd7cf1cd02ccc09d334fb2570726c7eaafa2c7dda"} Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.917803 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.938370 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.957755 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.977672 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 07:53:55 crc kubenswrapper[4787]: I0127 07:53:55.996124 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.018789 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.037191 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.057841 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.098153 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.118473 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.137505 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.156897 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.177234 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.196880 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.217049 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.238797 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.258009 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.276943 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.297639 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.317624 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.337743 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.357628 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.379864 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.398493 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.418247 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.438426 4787 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.478545 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgzr\" (UniqueName: \"kubernetes.io/projected/3ef9e285-88a8-499d-8fb8-e4c882336e68-kube-api-access-7kgzr\") pod \"machine-api-operator-5694c8668f-qtkc8\" (UID: \"3ef9e285-88a8-499d-8fb8-e4c882336e68\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.507811 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a3473c0-d321-4be8-860f-bd51210cb58f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ln7rp\" (UID: \"0a3473c0-d321-4be8-860f-bd51210cb58f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.517412 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cbr9\" (UniqueName: \"kubernetes.io/projected/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-kube-api-access-9cbr9\") pod \"console-f9d7485db-qptnb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.534714 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfjfh\" (UniqueName: \"kubernetes.io/projected/832859dc-03b3-4b2b-9a40-75372ebb38d9-kube-api-access-dfjfh\") pod \"openshift-apiserver-operator-796bbdcf4f-mnrsb\" (UID: \"832859dc-03b3-4b2b-9a40-75372ebb38d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.564067 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvmrj\" (UniqueName: \"kubernetes.io/projected/287286a8-80b2-4c95-948d-a096153d8e51-kube-api-access-mvmrj\") pod \"machine-approver-56656f9798-tfh8x\" (UID: \"287286a8-80b2-4c95-948d-a096153d8e51\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.566195 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.573713 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5qpv\" (UniqueName: \"kubernetes.io/projected/a1672707-9776-4274-9bdd-4f1dabf83038-kube-api-access-s5qpv\") pod \"openshift-config-operator-7777fb866f-hn2j2\" (UID: \"a1672707-9776-4274-9bdd-4f1dabf83038\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.594802 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k9vf\" (UniqueName: \"kubernetes.io/projected/af87df13-9164-4e5e-99d4-5cbd9dfe80a5-kube-api-access-5k9vf\") pod \"console-operator-58897d9998-ctv89\" (UID: \"af87df13-9164-4e5e-99d4-5cbd9dfe80a5\") " pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.616324 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v9n9\" (UniqueName: \"kubernetes.io/projected/301e3f1a-19c5-47a7-b85d-81676098f971-kube-api-access-5v9n9\") pod \"downloads-7954f5f757-xwx4w\" (UID: \"301e3f1a-19c5-47a7-b85d-81676098f971\") " pod="openshift-console/downloads-7954f5f757-xwx4w" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.635272 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.645848 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp227\" (UniqueName: \"kubernetes.io/projected/0a3473c0-d321-4be8-860f-bd51210cb58f-kube-api-access-tp227\") pod \"cluster-image-registry-operator-dc59b4c8b-ln7rp\" (UID: \"0a3473c0-d321-4be8-860f-bd51210cb58f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.665662 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stktz\" (UniqueName: \"kubernetes.io/projected/0259775f-1fef-486a-bc17-4638e38ed83f-kube-api-access-stktz\") pod \"route-controller-manager-6576b87f9c-dlz8t\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.685881 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.693157 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfw62\" (UniqueName: \"kubernetes.io/projected/7c9507a4-925b-418e-b824-f338cd69a66e-kube-api-access-nfw62\") pod \"cluster-samples-operator-665b6dd947-q44w4\" (UID: \"7c9507a4-925b-418e-b824-f338cd69a66e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.694857 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjkkn\" (UniqueName: \"kubernetes.io/projected/daf6a704-5857-419a-bfaa-e2af3a05d386-kube-api-access-mjkkn\") pod \"authentication-operator-69f744f599-84k56\" (UID: \"daf6a704-5857-419a-bfaa-e2af3a05d386\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.697923 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.716582 4787 request.go:700] Waited for 1.927739054s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.726957 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7r6d\" (UniqueName: \"kubernetes.io/projected/ba0745e4-7610-44e2-9a65-cd3875393d64-kube-api-access-m7r6d\") pod \"apiserver-76f77b778f-zkx8b\" (UID: \"ba0745e4-7610-44e2-9a65-cd3875393d64\") " pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.739331 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxmsq\" (UniqueName: \"kubernetes.io/projected/ca4c081d-896c-4cc8-9656-59364376de35-kube-api-access-kxmsq\") pod \"controller-manager-879f6c89f-g44f6\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.739957 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.759490 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.766254 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c54z8\" (UniqueName: \"kubernetes.io/projected/fcc64739-2fd2-4413-a19e-0bc14dd883d6-kube-api-access-c54z8\") pod \"oauth-openshift-558db77b4-wdppr\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.771698 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.778146 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.797415 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.799685 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xwx4w" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.806097 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qtkc8"] Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.814849 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.817116 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.839099 4787 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.841088 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.853525 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.861583 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.881126 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.914207 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" event={"ID":"3ef9e285-88a8-499d-8fb8-e4c882336e68","Type":"ContainerStarted","Data":"76117a7f274ff7c0cffedeefc995712695a6059ef8d3e764659f3a5314b4249d"} Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.927739 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" event={"ID":"ccb9403f-6f1d-4bd9-a18b-b90404cff1a0","Type":"ContainerStarted","Data":"8a1cf1b1b6790d6e608c8ba019ab5acb7570ef6c8ec5120887492e4023edb838"} Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.928235 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.936921 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.946062 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l85r5\" (UniqueName: \"kubernetes.io/projected/c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3-kube-api-access-l85r5\") pod \"machine-config-controller-84d6567774-vl4tj\" (UID: \"c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.946225 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" event={"ID":"287286a8-80b2-4c95-948d-a096153d8e51","Type":"ContainerStarted","Data":"2eb2a3bdfae348a5e8e146d4add3035381139e2dcbc66f1664dad4a8791c2480"} Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.957387 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.965175 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9rhv\" (UniqueName: \"kubernetes.io/projected/3b9018b2-8e63-4c80-8c71-cc9fa7ddb853-kube-api-access-t9rhv\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgw8m\" (UID: \"3b9018b2-8e63-4c80-8c71-cc9fa7ddb853\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.965505 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.976516 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4" Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.978488 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb"] Jan 27 07:53:56 crc kubenswrapper[4787]: I0127 07:53:56.982597 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvsj6\" (UniqueName: \"kubernetes.io/projected/c71976d9-f7ec-4258-8bf5-08a526607fd9-kube-api-access-vvsj6\") pod \"control-plane-machine-set-operator-78cbb6b69f-wdvfw\" (UID: \"c71976d9-f7ec-4258-8bf5-08a526607fd9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.003393 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.007128 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrtv2\" (UniqueName: \"kubernetes.io/projected/52652793-dcbd-4568-a32e-b727818c61a8-kube-api-access-xrtv2\") pod \"kube-storage-version-migrator-operator-b67b599dd-d9n62\" (UID: \"52652793-dcbd-4568-a32e-b727818c61a8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.026052 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.054911 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.057036 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.064281 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cqdk9\" (UID: \"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.065198 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49hhm\" (UniqueName: \"kubernetes.io/projected/2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5-kube-api-access-49hhm\") pod \"ingress-operator-5b745b69d9-cqdk9\" (UID: \"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.067248 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qvpd\" (UniqueName: \"kubernetes.io/projected/4c622447-70d1-4b27-b09c-6fa6402f632c-kube-api-access-8qvpd\") pod \"package-server-manager-789f6589d5-4xj5z\" (UID: \"4c622447-70d1-4b27-b09c-6fa6402f632c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.073732 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgqps\" (UniqueName: \"kubernetes.io/projected/f063d15e-fe05-46d3-8304-72da67594ea6-kube-api-access-dgqps\") pod \"etcd-operator-b45778765-mnzjr\" (UID: \"f063d15e-fe05-46d3-8304-72da67594ea6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.114538 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124433 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjp2w\" (UniqueName: \"kubernetes.io/projected/320d8560-54f7-4505-aa98-5beb6c45505e-kube-api-access-rjp2w\") pod \"olm-operator-6b444d44fb-vm6dc\" (UID: \"320d8560-54f7-4505-aa98-5beb6c45505e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124478 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-registry-tls\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124498 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/320d8560-54f7-4505-aa98-5beb6c45505e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vm6dc\" (UID: \"320d8560-54f7-4505-aa98-5beb6c45505e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124516 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f4a2300-d512-490d-a876-6ad03f0c2f31-service-ca-bundle\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124604 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d0e8826-188e-474f-9d5b-848886f95d1d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j6mnr\" (UID: \"8d0e8826-188e-474f-9d5b-848886f95d1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124621 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl2ds\" (UniqueName: \"kubernetes.io/projected/2e88d48e-6302-4a84-90a7-69446be90e4a-kube-api-access-nl2ds\") pod \"marketplace-operator-79b997595-h5d4g\" (UID: \"2e88d48e-6302-4a84-90a7-69446be90e4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124670 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/152af819-6586-4345-b2ef-cb8ad845a6b1-secret-volume\") pod \"collect-profiles-29491665-p955p\" (UID: \"152af819-6586-4345-b2ef-cb8ad845a6b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124708 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/320d8560-54f7-4505-aa98-5beb6c45505e-srv-cert\") pod \"olm-operator-6b444d44fb-vm6dc\" (UID: \"320d8560-54f7-4505-aa98-5beb6c45505e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124725 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad35acfc-9b56-41e7-87bd-b43f5c006dd5-metrics-tls\") pod \"dns-operator-744455d44c-t6xzm\" (UID: \"ad35acfc-9b56-41e7-87bd-b43f5c006dd5\") " pod="openshift-dns-operator/dns-operator-744455d44c-t6xzm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124742 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0260c856-7cbc-4fdd-887e-5b2403c05e04-node-bootstrap-token\") pod \"machine-config-server-9xlcr\" (UID: \"0260c856-7cbc-4fdd-887e-5b2403c05e04\") " pod="openshift-machine-config-operator/machine-config-server-9xlcr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124760 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7x5m\" (UniqueName: 
\"kubernetes.io/projected/66eeb178-2611-4361-962c-a399b1f243b0-kube-api-access-j7x5m\") pod \"catalog-operator-68c6474976-cgsgg\" (UID: \"66eeb178-2611-4361-962c-a399b1f243b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124810 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7727572b-c246-4b22-b3eb-14275d3d25ee-serving-cert\") pod \"service-ca-operator-777779d784-xf9dv\" (UID: \"7727572b-c246-4b22-b3eb-14275d3d25ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124828 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124845 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7c228-ae7f-44c7-98df-78bc3949c528-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wv4z7\" (UID: \"c1a7c228-ae7f-44c7-98df-78bc3949c528\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124873 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8b5jc\" (UID: \"770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124891 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1a7c228-ae7f-44c7-98df-78bc3949c528-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wv4z7\" (UID: \"c1a7c228-ae7f-44c7-98df-78bc3949c528\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124907 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cd10b89a-565a-4ae6-b7dd-4bac183ece8c-images\") pod \"machine-config-operator-74547568cd-m7czl\" (UID: \"cd10b89a-565a-4ae6-b7dd-4bac183ece8c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124932 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8hpd\" (UniqueName: \"kubernetes.io/projected/5f4a2300-d512-490d-a876-6ad03f0c2f31-kube-api-access-h8hpd\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124960 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5f4a2300-d512-490d-a876-6ad03f0c2f31-default-certificate\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.124979 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wqwl\" (UniqueName: \"kubernetes.io/projected/be571415-6ab0-4b8f-a971-3225465af110-kube-api-access-2wqwl\") pod \"migrator-59844c95c7-h4cr4\" (UID: \"be571415-6ab0-4b8f-a971-3225465af110\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4cr4" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125011 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5f4a2300-d512-490d-a876-6ad03f0c2f31-stats-auth\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125029 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-registry-certificates\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125049 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0260c856-7cbc-4fdd-887e-5b2403c05e04-certs\") pod \"machine-config-server-9xlcr\" (UID: \"0260c856-7cbc-4fdd-887e-5b2403c05e04\") " pod="openshift-machine-config-operator/machine-config-server-9xlcr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125065 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f4a2300-d512-490d-a876-6ad03f0c2f31-metrics-certs\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125084 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125102 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf-config\") pod \"kube-apiserver-operator-766d6c64bb-8b5jc\" (UID: \"770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125176 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d0e8826-188e-474f-9d5b-848886f95d1d-kube-api-access\") 
pod \"kube-controller-manager-operator-78b949d7b-j6mnr\" (UID: \"8d0e8826-188e-474f-9d5b-848886f95d1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125221 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2e88d48e-6302-4a84-90a7-69446be90e4a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h5d4g\" (UID: \"2e88d48e-6302-4a84-90a7-69446be90e4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125247 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-bound-sa-token\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125270 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1a7c228-ae7f-44c7-98df-78bc3949c528-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wv4z7\" (UID: \"c1a7c228-ae7f-44c7-98df-78bc3949c528\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125314 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0e8826-188e-474f-9d5b-848886f95d1d-config\") pod \"kube-controller-manager-operator-78b949d7b-j6mnr\" (UID: \"8d0e8826-188e-474f-9d5b-848886f95d1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125366 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd10b89a-565a-4ae6-b7dd-4bac183ece8c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m7czl\" (UID: \"cd10b89a-565a-4ae6-b7dd-4bac183ece8c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125388 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8b5jc\" (UID: \"770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125450 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw5zw\" (UniqueName: \"kubernetes.io/projected/7727572b-c246-4b22-b3eb-14275d3d25ee-kube-api-access-fw5zw\") pod \"service-ca-operator-777779d784-xf9dv\" (UID: \"7727572b-c246-4b22-b3eb-14275d3d25ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125476 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd9pb\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-kube-api-access-sd9pb\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125496 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/66eeb178-2611-4361-962c-a399b1f243b0-srv-cert\") pod \"catalog-operator-68c6474976-cgsgg\" (UID: \"66eeb178-2611-4361-962c-a399b1f243b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125514 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/66eeb178-2611-4361-962c-a399b1f243b0-profile-collector-cert\") pod \"catalog-operator-68c6474976-cgsgg\" (UID: \"66eeb178-2611-4361-962c-a399b1f243b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125537 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjxfr\" (UniqueName: \"kubernetes.io/projected/ad35acfc-9b56-41e7-87bd-b43f5c006dd5-kube-api-access-mjxfr\") pod \"dns-operator-744455d44c-t6xzm\" (UID: \"ad35acfc-9b56-41e7-87bd-b43f5c006dd5\") " pod="openshift-dns-operator/dns-operator-744455d44c-t6xzm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125610 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125635 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6l9\" (UniqueName: \"kubernetes.io/projected/152af819-6586-4345-b2ef-cb8ad845a6b1-kube-api-access-9r6l9\") pod \"collect-profiles-29491665-p955p\" (UID: \"152af819-6586-4345-b2ef-cb8ad845a6b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125682 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e88d48e-6302-4a84-90a7-69446be90e4a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h5d4g\" (UID: \"2e88d48e-6302-4a84-90a7-69446be90e4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125851 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdfkh\" (UniqueName: \"kubernetes.io/projected/cd10b89a-565a-4ae6-b7dd-4bac183ece8c-kube-api-access-vdfkh\") pod \"machine-config-operator-74547568cd-m7czl\" (UID: \"cd10b89a-565a-4ae6-b7dd-4bac183ece8c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 
07:53:57.125884 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sczk\" (UniqueName: \"kubernetes.io/projected/0260c856-7cbc-4fdd-887e-5b2403c05e04-kube-api-access-9sczk\") pod \"machine-config-server-9xlcr\" (UID: \"0260c856-7cbc-4fdd-887e-5b2403c05e04\") " pod="openshift-machine-config-operator/machine-config-server-9xlcr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125902 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd10b89a-565a-4ae6-b7dd-4bac183ece8c-proxy-tls\") pod \"machine-config-operator-74547568cd-m7czl\" (UID: \"cd10b89a-565a-4ae6-b7dd-4bac183ece8c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.125995 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/152af819-6586-4345-b2ef-cb8ad845a6b1-config-volume\") pod \"collect-profiles-29491665-p955p\" (UID: \"152af819-6586-4345-b2ef-cb8ad845a6b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.126034 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7727572b-c246-4b22-b3eb-14275d3d25ee-config\") pod \"service-ca-operator-777779d784-xf9dv\" (UID: \"7727572b-c246-4b22-b3eb-14275d3d25ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.126071 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-trusted-ca\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: E0127 07:53:57.127184 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:57.627170691 +0000 UTC m=+143.279526183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229276 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229522 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7727572b-c246-4b22-b3eb-14275d3d25ee-serving-cert\") pod \"service-ca-operator-777779d784-xf9dv\" (UID: \"7727572b-c246-4b22-b3eb-14275d3d25ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229553 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7307207b-fd57-4efc-807b-7e0664873f73-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tx6fd\" (UID: \"7307207b-fd57-4efc-807b-7e0664873f73\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tx6fd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229606 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229626 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7c228-ae7f-44c7-98df-78bc3949c528-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wv4z7\" (UID: \"c1a7c228-ae7f-44c7-98df-78bc3949c528\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229656 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1a7c228-ae7f-44c7-98df-78bc3949c528-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wv4z7\" (UID: \"c1a7c228-ae7f-44c7-98df-78bc3949c528\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229674 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cd10b89a-565a-4ae6-b7dd-4bac183ece8c-images\") pod \"machine-config-operator-74547568cd-m7czl\" (UID: \"cd10b89a-565a-4ae6-b7dd-4bac183ece8c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229691 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8b5jc\" (UID: \"770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229722 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8hpd\" (UniqueName: \"kubernetes.io/projected/5f4a2300-d512-490d-a876-6ad03f0c2f31-kube-api-access-h8hpd\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229751 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5f4a2300-d512-490d-a876-6ad03f0c2f31-default-certificate\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229770 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd7w4\" (UniqueName: \"kubernetes.io/projected/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-kube-api-access-bd7w4\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229791 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0c6406d2-db7c-458f-9c56-a640eee1f327-signing-cabundle\") pod \"service-ca-9c57cc56f-crvsg\" (UID: \"0c6406d2-db7c-458f-9c56-a640eee1f327\") " pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229822 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-mountpoint-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229855 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wqwl\" (UniqueName: \"kubernetes.io/projected/be571415-6ab0-4b8f-a971-3225465af110-kube-api-access-2wqwl\") pod \"migrator-59844c95c7-h4cr4\" (UID: \"be571415-6ab0-4b8f-a971-3225465af110\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4cr4" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229884 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5f4a2300-d512-490d-a876-6ad03f0c2f31-stats-auth\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229905 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0260c856-7cbc-4fdd-887e-5b2403c05e04-certs\") pod \"machine-config-server-9xlcr\" (UID: 
\"0260c856-7cbc-4fdd-887e-5b2403c05e04\") " pod="openshift-machine-config-operator/machine-config-server-9xlcr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229923 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f4a2300-d512-490d-a876-6ad03f0c2f31-metrics-certs\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229959 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-registry-certificates\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.229978 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-socket-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230018 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230035 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf-config\") pod \"kube-apiserver-operator-766d6c64bb-8b5jc\" (UID: \"770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230053 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8aecac31-d20f-4725-b563-b66c9288769d-metrics-tls\") pod \"dns-default-rkzlx\" (UID: \"8aecac31-d20f-4725-b563-b66c9288769d\") " pod="openshift-dns/dns-default-rkzlx" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230116 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrn4l\" (UniqueName: \"kubernetes.io/projected/7307207b-fd57-4efc-807b-7e0664873f73-kube-api-access-wrn4l\") pod \"multus-admission-controller-857f4d67dd-tx6fd\" (UID: \"7307207b-fd57-4efc-807b-7e0664873f73\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tx6fd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230137 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d0e8826-188e-474f-9d5b-848886f95d1d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j6mnr\" (UID: \"8d0e8826-188e-474f-9d5b-848886f95d1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230158 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2e88d48e-6302-4a84-90a7-69446be90e4a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h5d4g\" (UID: \"2e88d48e-6302-4a84-90a7-69446be90e4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230191 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-bound-sa-token\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230211 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rww65\" (UniqueName: \"kubernetes.io/projected/082e51f2-7ac3-4111-a40d-eb8498db9153-kube-api-access-rww65\") pod \"packageserver-d55dfcdfc-kqzdm\" (UID: \"082e51f2-7ac3-4111-a40d-eb8498db9153\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230242 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1a7c228-ae7f-44c7-98df-78bc3949c528-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wv4z7\" (UID: \"c1a7c228-ae7f-44c7-98df-78bc3949c528\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230262 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0e8826-188e-474f-9d5b-848886f95d1d-config\") pod \"kube-controller-manager-operator-78b949d7b-j6mnr\" (UID: \"8d0e8826-188e-474f-9d5b-848886f95d1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230279 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0c6406d2-db7c-458f-9c56-a640eee1f327-signing-key\") pod \"service-ca-9c57cc56f-crvsg\" (UID: \"0c6406d2-db7c-458f-9c56-a640eee1f327\") " pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230305 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gq6w\" (UniqueName: \"kubernetes.io/projected/0c6406d2-db7c-458f-9c56-a640eee1f327-kube-api-access-9gq6w\") pod \"service-ca-9c57cc56f-crvsg\" (UID: \"0c6406d2-db7c-458f-9c56-a640eee1f327\") " pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230323 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8b5jc\" (UID: \"770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230351 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd10b89a-565a-4ae6-b7dd-4bac183ece8c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m7czl\" (UID: \"cd10b89a-565a-4ae6-b7dd-4bac183ece8c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230371 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9ce2a24d-e7e3-4c93-a46a-e914e248d6ae-cert\") pod \"ingress-canary-znxn2\" (UID: \"9ce2a24d-e7e3-4c93-a46a-e914e248d6ae\") " pod="openshift-ingress-canary/ingress-canary-znxn2" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230420 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw5zw\" (UniqueName: \"kubernetes.io/projected/7727572b-c246-4b22-b3eb-14275d3d25ee-kube-api-access-fw5zw\") pod \"service-ca-operator-777779d784-xf9dv\" (UID: \"7727572b-c246-4b22-b3eb-14275d3d25ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230444 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tczr8\" (UniqueName: \"kubernetes.io/projected/8aecac31-d20f-4725-b563-b66c9288769d-kube-api-access-tczr8\") pod \"dns-default-rkzlx\" (UID: \"8aecac31-d20f-4725-b563-b66c9288769d\") " pod="openshift-dns/dns-default-rkzlx" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230463 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd9pb\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-kube-api-access-sd9pb\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230499 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/66eeb178-2611-4361-962c-a399b1f243b0-srv-cert\") pod \"catalog-operator-68c6474976-cgsgg\" (UID: \"66eeb178-2611-4361-962c-a399b1f243b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230519 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/66eeb178-2611-4361-962c-a399b1f243b0-profile-collector-cert\") pod \"catalog-operator-68c6474976-cgsgg\" (UID: \"66eeb178-2611-4361-962c-a399b1f243b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230538 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjxfr\" (UniqueName: \"kubernetes.io/projected/ad35acfc-9b56-41e7-87bd-b43f5c006dd5-kube-api-access-mjxfr\") pod \"dns-operator-744455d44c-t6xzm\" (UID: \"ad35acfc-9b56-41e7-87bd-b43f5c006dd5\") " pod="openshift-dns-operator/dns-operator-744455d44c-t6xzm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230588 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r6l9\" (UniqueName: \"kubernetes.io/projected/152af819-6586-4345-b2ef-cb8ad845a6b1-kube-api-access-9r6l9\") pod \"collect-profiles-29491665-p955p\" (UID: 
\"152af819-6586-4345-b2ef-cb8ad845a6b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230606 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e88d48e-6302-4a84-90a7-69446be90e4a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h5d4g\" (UID: \"2e88d48e-6302-4a84-90a7-69446be90e4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230666 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/082e51f2-7ac3-4111-a40d-eb8498db9153-tmpfs\") pod \"packageserver-d55dfcdfc-kqzdm\" (UID: \"082e51f2-7ac3-4111-a40d-eb8498db9153\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230711 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-plugins-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230764 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8aecac31-d20f-4725-b563-b66c9288769d-config-volume\") pod \"dns-default-rkzlx\" (UID: \"8aecac31-d20f-4725-b563-b66c9288769d\") " pod="openshift-dns/dns-default-rkzlx" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230806 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sczk\" (UniqueName: \"kubernetes.io/projected/0260c856-7cbc-4fdd-887e-5b2403c05e04-kube-api-access-9sczk\") pod \"machine-config-server-9xlcr\" (UID: \"0260c856-7cbc-4fdd-887e-5b2403c05e04\") " pod="openshift-machine-config-operator/machine-config-server-9xlcr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230826 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd10b89a-565a-4ae6-b7dd-4bac183ece8c-proxy-tls\") pod \"machine-config-operator-74547568cd-m7czl\" (UID: \"cd10b89a-565a-4ae6-b7dd-4bac183ece8c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230846 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdfkh\" (UniqueName: \"kubernetes.io/projected/cd10b89a-565a-4ae6-b7dd-4bac183ece8c-kube-api-access-vdfkh\") pod \"machine-config-operator-74547568cd-m7czl\" (UID: \"cd10b89a-565a-4ae6-b7dd-4bac183ece8c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230882 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/152af819-6586-4345-b2ef-cb8ad845a6b1-config-volume\") pod \"collect-profiles-29491665-p955p\" (UID: \"152af819-6586-4345-b2ef-cb8ad845a6b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230914 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-trusted-ca\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230933 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7727572b-c246-4b22-b3eb-14275d3d25ee-config\") pod \"service-ca-operator-777779d784-xf9dv\" (UID: \"7727572b-c246-4b22-b3eb-14275d3d25ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.230963 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjp2w\" (UniqueName: \"kubernetes.io/projected/320d8560-54f7-4505-aa98-5beb6c45505e-kube-api-access-rjp2w\") pod \"olm-operator-6b444d44fb-vm6dc\" (UID: \"320d8560-54f7-4505-aa98-5beb6c45505e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231001 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-registry-tls\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231020 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f4a2300-d512-490d-a876-6ad03f0c2f31-service-ca-bundle\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231038 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/320d8560-54f7-4505-aa98-5beb6c45505e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vm6dc\" (UID: \"320d8560-54f7-4505-aa98-5beb6c45505e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231055 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/082e51f2-7ac3-4111-a40d-eb8498db9153-apiservice-cert\") pod \"packageserver-d55dfcdfc-kqzdm\" (UID: \"082e51f2-7ac3-4111-a40d-eb8498db9153\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231076 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl2ds\" (UniqueName: \"kubernetes.io/projected/2e88d48e-6302-4a84-90a7-69446be90e4a-kube-api-access-nl2ds\") pod \"marketplace-operator-79b997595-h5d4g\" (UID: \"2e88d48e-6302-4a84-90a7-69446be90e4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231095 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d0e8826-188e-474f-9d5b-848886f95d1d-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-j6mnr\" (UID: \"8d0e8826-188e-474f-9d5b-848886f95d1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231117 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2rb9\" (UniqueName: \"kubernetes.io/projected/9ce2a24d-e7e3-4c93-a46a-e914e248d6ae-kube-api-access-x2rb9\") pod \"ingress-canary-znxn2\" (UID: \"9ce2a24d-e7e3-4c93-a46a-e914e248d6ae\") " pod="openshift-ingress-canary/ingress-canary-znxn2" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231144 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-registration-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231161 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/082e51f2-7ac3-4111-a40d-eb8498db9153-webhook-cert\") pod \"packageserver-d55dfcdfc-kqzdm\" (UID: \"082e51f2-7ac3-4111-a40d-eb8498db9153\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231181 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/152af819-6586-4345-b2ef-cb8ad845a6b1-secret-volume\") pod \"collect-profiles-29491665-p955p\" (UID: \"152af819-6586-4345-b2ef-cb8ad845a6b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231200 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/320d8560-54f7-4505-aa98-5beb6c45505e-srv-cert\") pod \"olm-operator-6b444d44fb-vm6dc\" (UID: \"320d8560-54f7-4505-aa98-5beb6c45505e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231220 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad35acfc-9b56-41e7-87bd-b43f5c006dd5-metrics-tls\") pod \"dns-operator-744455d44c-t6xzm\" (UID: \"ad35acfc-9b56-41e7-87bd-b43f5c006dd5\") " pod="openshift-dns-operator/dns-operator-744455d44c-t6xzm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231237 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-csi-data-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.231259 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0260c856-7cbc-4fdd-887e-5b2403c05e04-node-bootstrap-token\") pod \"machine-config-server-9xlcr\" (UID: \"0260c856-7cbc-4fdd-887e-5b2403c05e04\") " pod="openshift-machine-config-operator/machine-config-server-9xlcr" Jan 27 07:53:57 crc kubenswrapper[4787]: 
I0127 07:53:57.231287 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7x5m\" (UniqueName: \"kubernetes.io/projected/66eeb178-2611-4361-962c-a399b1f243b0-kube-api-access-j7x5m\") pod \"catalog-operator-68c6474976-cgsgg\" (UID: \"66eeb178-2611-4361-962c-a399b1f243b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:53:57 crc kubenswrapper[4787]: E0127 07:53:57.232713 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:57.732696737 +0000 UTC m=+143.385052229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.241543 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/152af819-6586-4345-b2ef-cb8ad845a6b1-config-volume\") pod \"collect-profiles-29491665-p955p\" (UID: \"152af819-6586-4345-b2ef-cb8ad845a6b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.242589 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.243910 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-trusted-ca\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.244887 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf-config\") pod \"kube-apiserver-operator-766d6c64bb-8b5jc\" (UID: \"770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.245212 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7c228-ae7f-44c7-98df-78bc3949c528-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wv4z7\" (UID: \"c1a7c228-ae7f-44c7-98df-78bc3949c528\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.246337 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5f4a2300-d512-490d-a876-6ad03f0c2f31-service-ca-bundle\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.249138 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-registry-tls\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.250434 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd10b89a-565a-4ae6-b7dd-4bac183ece8c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m7czl\" (UID: \"cd10b89a-565a-4ae6-b7dd-4bac183ece8c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.250539 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0e8826-188e-474f-9d5b-848886f95d1d-config\") pod \"kube-controller-manager-operator-78b949d7b-j6mnr\" (UID: \"8d0e8826-188e-474f-9d5b-848886f95d1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.250699 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-registry-certificates\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.251590 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cd10b89a-565a-4ae6-b7dd-4bac183ece8c-images\") pod \"machine-config-operator-74547568cd-m7czl\" (UID: \"cd10b89a-565a-4ae6-b7dd-4bac183ece8c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.253043 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.254748 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.256151 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7727572b-c246-4b22-b3eb-14275d3d25ee-config\") pod \"service-ca-operator-777779d784-xf9dv\" (UID: \"7727572b-c246-4b22-b3eb-14275d3d25ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.256725 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7727572b-c246-4b22-b3eb-14275d3d25ee-serving-cert\") pod \"service-ca-operator-777779d784-xf9dv\" (UID: \"7727572b-c246-4b22-b3eb-14275d3d25ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.258917 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e88d48e-6302-4a84-90a7-69446be90e4a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h5d4g\" (UID: \"2e88d48e-6302-4a84-90a7-69446be90e4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.266420 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2e88d48e-6302-4a84-90a7-69446be90e4a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h5d4g\" (UID: \"2e88d48e-6302-4a84-90a7-69446be90e4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.271038 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/66eeb178-2611-4361-962c-a399b1f243b0-srv-cert\") pod \"catalog-operator-68c6474976-cgsgg\" (UID: \"66eeb178-2611-4361-962c-a399b1f243b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.274158 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/320d8560-54f7-4505-aa98-5beb6c45505e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vm6dc\" (UID: \"320d8560-54f7-4505-aa98-5beb6c45505e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.277217 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d0e8826-188e-474f-9d5b-848886f95d1d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j6mnr\" (UID: \"8d0e8826-188e-474f-9d5b-848886f95d1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.284218 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/66eeb178-2611-4361-962c-a399b1f243b0-profile-collector-cert\") pod \"catalog-operator-68c6474976-cgsgg\" (UID: \"66eeb178-2611-4361-962c-a399b1f243b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.284411 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nl2ds\" (UniqueName: \"kubernetes.io/projected/2e88d48e-6302-4a84-90a7-69446be90e4a-kube-api-access-nl2ds\") pod \"marketplace-operator-79b997595-h5d4g\" (UID: \"2e88d48e-6302-4a84-90a7-69446be90e4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.285172 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0260c856-7cbc-4fdd-887e-5b2403c05e04-certs\") pod \"machine-config-server-9xlcr\" (UID: \"0260c856-7cbc-4fdd-887e-5b2403c05e04\") " pod="openshift-machine-config-operator/machine-config-server-9xlcr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.285456 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5f4a2300-d512-490d-a876-6ad03f0c2f31-stats-auth\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.286072 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd10b89a-565a-4ae6-b7dd-4bac183ece8c-proxy-tls\") pod \"machine-config-operator-74547568cd-m7czl\" (UID: \"cd10b89a-565a-4ae6-b7dd-4bac183ece8c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.288787 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8b5jc\" (UID: \"770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.289973 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qptnb"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.292286 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0260c856-7cbc-4fdd-887e-5b2403c05e04-node-bootstrap-token\") pod \"machine-config-server-9xlcr\" (UID: \"0260c856-7cbc-4fdd-887e-5b2403c05e04\") " pod="openshift-machine-config-operator/machine-config-server-9xlcr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.293223 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad35acfc-9b56-41e7-87bd-b43f5c006dd5-metrics-tls\") pod \"dns-operator-744455d44c-t6xzm\" (UID: \"ad35acfc-9b56-41e7-87bd-b43f5c006dd5\") " pod="openshift-dns-operator/dns-operator-744455d44c-t6xzm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.293770 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/152af819-6586-4345-b2ef-cb8ad845a6b1-secret-volume\") pod \"collect-profiles-29491665-p955p\" (UID: \"152af819-6586-4345-b2ef-cb8ad845a6b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.293914 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/320d8560-54f7-4505-aa98-5beb6c45505e-srv-cert\") pod 
\"olm-operator-6b444d44fb-vm6dc\" (UID: \"320d8560-54f7-4505-aa98-5beb6c45505e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.295834 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.296528 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1a7c228-ae7f-44c7-98df-78bc3949c528-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wv4z7\" (UID: \"c1a7c228-ae7f-44c7-98df-78bc3949c528\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.298416 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5f4a2300-d512-490d-a876-6ad03f0c2f31-default-certificate\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.300909 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f4a2300-d512-490d-a876-6ad03f0c2f31-metrics-certs\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.303284 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd9pb\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-kube-api-access-sd9pb\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.315483 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wqwl\" (UniqueName: \"kubernetes.io/projected/be571415-6ab0-4b8f-a971-3225465af110-kube-api-access-2wqwl\") pod \"migrator-59844c95c7-h4cr4\" (UID: \"be571415-6ab0-4b8f-a971-3225465af110\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4cr4" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.338775 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7307207b-fd57-4efc-807b-7e0664873f73-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tx6fd\" (UID: \"7307207b-fd57-4efc-807b-7e0664873f73\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tx6fd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.354392 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd7w4\" (UniqueName: \"kubernetes.io/projected/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-kube-api-access-bd7w4\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.354430 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-mountpoint-dir\") pod \"csi-hostpathplugin-mwnxd\" 
(UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.354456 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0c6406d2-db7c-458f-9c56-a640eee1f327-signing-cabundle\") pod \"service-ca-9c57cc56f-crvsg\" (UID: \"0c6406d2-db7c-458f-9c56-a640eee1f327\") " pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.354504 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-socket-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.354537 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8aecac31-d20f-4725-b563-b66c9288769d-metrics-tls\") pod \"dns-default-rkzlx\" (UID: \"8aecac31-d20f-4725-b563-b66c9288769d\") " pod="openshift-dns/dns-default-rkzlx" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.354584 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrn4l\" (UniqueName: \"kubernetes.io/projected/7307207b-fd57-4efc-807b-7e0664873f73-kube-api-access-wrn4l\") pod \"multus-admission-controller-857f4d67dd-tx6fd\" (UID: \"7307207b-fd57-4efc-807b-7e0664873f73\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tx6fd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.354653 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rww65\" (UniqueName: \"kubernetes.io/projected/082e51f2-7ac3-4111-a40d-eb8498db9153-kube-api-access-rww65\") pod \"packageserver-d55dfcdfc-kqzdm\" (UID: \"082e51f2-7ac3-4111-a40d-eb8498db9153\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.354717 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0c6406d2-db7c-458f-9c56-a640eee1f327-signing-key\") pod \"service-ca-9c57cc56f-crvsg\" (UID: \"0c6406d2-db7c-458f-9c56-a640eee1f327\") " pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.354742 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gq6w\" (UniqueName: \"kubernetes.io/projected/0c6406d2-db7c-458f-9c56-a640eee1f327-kube-api-access-9gq6w\") pod \"service-ca-9c57cc56f-crvsg\" (UID: \"0c6406d2-db7c-458f-9c56-a640eee1f327\") " pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.354767 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-mountpoint-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.355756 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/0c6406d2-db7c-458f-9c56-a640eee1f327-signing-cabundle\") pod \"service-ca-9c57cc56f-crvsg\" (UID: \"0c6406d2-db7c-458f-9c56-a640eee1f327\") " pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.355853 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7307207b-fd57-4efc-807b-7e0664873f73-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tx6fd\" (UID: \"7307207b-fd57-4efc-807b-7e0664873f73\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tx6fd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.345554 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.356401 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-socket-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.349881 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d0e8826-188e-474f-9d5b-848886f95d1d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j6mnr\" (UID: \"8d0e8826-188e-474f-9d5b-848886f95d1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.354794 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9ce2a24d-e7e3-4c93-a46a-e914e248d6ae-cert\") pod \"ingress-canary-znxn2\" (UID: \"9ce2a24d-e7e3-4c93-a46a-e914e248d6ae\") " pod="openshift-ingress-canary/ingress-canary-znxn2" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.357368 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tczr8\" (UniqueName: \"kubernetes.io/projected/8aecac31-d20f-4725-b563-b66c9288769d-kube-api-access-tczr8\") pod \"dns-default-rkzlx\" (UID: \"8aecac31-d20f-4725-b563-b66c9288769d\") " pod="openshift-dns/dns-default-rkzlx" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.357465 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.357642 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/082e51f2-7ac3-4111-a40d-eb8498db9153-tmpfs\") pod \"packageserver-d55dfcdfc-kqzdm\" (UID: \"082e51f2-7ac3-4111-a40d-eb8498db9153\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.357699 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-plugins-dir\") pod 
\"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.357768 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8aecac31-d20f-4725-b563-b66c9288769d-config-volume\") pod \"dns-default-rkzlx\" (UID: \"8aecac31-d20f-4725-b563-b66c9288769d\") " pod="openshift-dns/dns-default-rkzlx" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.357904 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/082e51f2-7ac3-4111-a40d-eb8498db9153-apiservice-cert\") pod \"packageserver-d55dfcdfc-kqzdm\" (UID: \"082e51f2-7ac3-4111-a40d-eb8498db9153\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.357966 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2rb9\" (UniqueName: \"kubernetes.io/projected/9ce2a24d-e7e3-4c93-a46a-e914e248d6ae-kube-api-access-x2rb9\") pod \"ingress-canary-znxn2\" (UID: \"9ce2a24d-e7e3-4c93-a46a-e914e248d6ae\") " pod="openshift-ingress-canary/ingress-canary-znxn2" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.357997 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-registration-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.358035 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/082e51f2-7ac3-4111-a40d-eb8498db9153-webhook-cert\") pod \"packageserver-d55dfcdfc-kqzdm\" (UID: \"082e51f2-7ac3-4111-a40d-eb8498db9153\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.358078 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-csi-data-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: E0127 07:53:57.358125 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:57.858095124 +0000 UTC m=+143.510450616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.358215 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-registration-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.358304 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-csi-data-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.358447 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-plugins-dir\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.358819 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/082e51f2-7ac3-4111-a40d-eb8498db9153-tmpfs\") pod \"packageserver-d55dfcdfc-kqzdm\" (UID: \"082e51f2-7ac3-4111-a40d-eb8498db9153\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.363816 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4cr4" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.364056 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/082e51f2-7ac3-4111-a40d-eb8498db9153-webhook-cert\") pod \"packageserver-d55dfcdfc-kqzdm\" (UID: \"082e51f2-7ac3-4111-a40d-eb8498db9153\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.368578 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/082e51f2-7ac3-4111-a40d-eb8498db9153-apiservice-cert\") pod \"packageserver-d55dfcdfc-kqzdm\" (UID: \"082e51f2-7ac3-4111-a40d-eb8498db9153\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.369002 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.382412 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-bound-sa-token\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.384793 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw5zw\" (UniqueName: \"kubernetes.io/projected/7727572b-c246-4b22-b3eb-14275d3d25ee-kube-api-access-fw5zw\") pod \"service-ca-operator-777779d784-xf9dv\" (UID: \"7727572b-c246-4b22-b3eb-14275d3d25ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.385374 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8aecac31-d20f-4725-b563-b66c9288769d-config-volume\") pod \"dns-default-rkzlx\" (UID: \"8aecac31-d20f-4725-b563-b66c9288769d\") " pod="openshift-dns/dns-default-rkzlx" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.385799 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9ce2a24d-e7e3-4c93-a46a-e914e248d6ae-cert\") pod \"ingress-canary-znxn2\" (UID: \"9ce2a24d-e7e3-4c93-a46a-e914e248d6ae\") " pod="openshift-ingress-canary/ingress-canary-znxn2" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.385946 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0c6406d2-db7c-458f-9c56-a640eee1f327-signing-key\") pod \"service-ca-9c57cc56f-crvsg\" (UID: \"0c6406d2-db7c-458f-9c56-a640eee1f327\") " pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.396801 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8b5jc\" (UID: \"770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.434131 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-84k56"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.434629 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xwx4w"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.437662 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1a7c228-ae7f-44c7-98df-78bc3949c528-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wv4z7\" (UID: \"c1a7c228-ae7f-44c7-98df-78bc3949c528\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.447210 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7x5m\" (UniqueName: \"kubernetes.io/projected/66eeb178-2611-4361-962c-a399b1f243b0-kube-api-access-j7x5m\") pod 
\"catalog-operator-68c6474976-cgsgg\" (UID: \"66eeb178-2611-4361-962c-a399b1f243b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.453387 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8aecac31-d20f-4725-b563-b66c9288769d-metrics-tls\") pod \"dns-default-rkzlx\" (UID: \"8aecac31-d20f-4725-b563-b66c9288769d\") " pod="openshift-dns/dns-default-rkzlx" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.455473 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdfkh\" (UniqueName: \"kubernetes.io/projected/cd10b89a-565a-4ae6-b7dd-4bac183ece8c-kube-api-access-vdfkh\") pod \"machine-config-operator-74547568cd-m7czl\" (UID: \"cd10b89a-565a-4ae6-b7dd-4bac183ece8c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.459273 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:57 crc kubenswrapper[4787]: E0127 07:53:57.459420 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:57.959397784 +0000 UTC m=+143.611753276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.460148 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: E0127 07:53:57.462407 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:57.962395975 +0000 UTC m=+143.614751467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.479644 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8hpd\" (UniqueName: \"kubernetes.io/projected/5f4a2300-d512-490d-a876-6ad03f0c2f31-kube-api-access-h8hpd\") pod \"router-default-5444994796-jzdds\" (UID: \"5f4a2300-d512-490d-a876-6ad03f0c2f31\") " pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.498678 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sczk\" (UniqueName: \"kubernetes.io/projected/0260c856-7cbc-4fdd-887e-5b2403c05e04-kube-api-access-9sczk\") pod \"machine-config-server-9xlcr\" (UID: \"0260c856-7cbc-4fdd-887e-5b2403c05e04\") " pod="openshift-machine-config-operator/machine-config-server-9xlcr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.533520 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjxfr\" (UniqueName: \"kubernetes.io/projected/ad35acfc-9b56-41e7-87bd-b43f5c006dd5-kube-api-access-mjxfr\") pod \"dns-operator-744455d44c-t6xzm\" (UID: \"ad35acfc-9b56-41e7-87bd-b43f5c006dd5\") " pod="openshift-dns-operator/dns-operator-744455d44c-t6xzm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.540422 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r6l9\" (UniqueName: \"kubernetes.io/projected/152af819-6586-4345-b2ef-cb8ad845a6b1-kube-api-access-9r6l9\") pod \"collect-profiles-29491665-p955p\" (UID: \"152af819-6586-4345-b2ef-cb8ad845a6b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.542664 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t6xzm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.547874 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wdppr"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.561044 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.561288 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjp2w\" (UniqueName: \"kubernetes.io/projected/320d8560-54f7-4505-aa98-5beb6c45505e-kube-api-access-rjp2w\") pod \"olm-operator-6b444d44fb-vm6dc\" (UID: \"320d8560-54f7-4505-aa98-5beb6c45505e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:53:57 crc kubenswrapper[4787]: E0127 07:53:57.561609 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:58.061592537 +0000 UTC m=+143.713948029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.561882 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ctv89"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.571164 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.577821 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.594253 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd7w4\" (UniqueName: \"kubernetes.io/projected/f72f30c1-aeab-47d1-b353-5ea33af8eb6b-kube-api-access-bd7w4\") pod \"csi-hostpathplugin-mwnxd\" (UID: \"f72f30c1-aeab-47d1-b353-5ea33af8eb6b\") " pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.594649 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.610881 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.618070 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.619112 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rww65\" (UniqueName: \"kubernetes.io/projected/082e51f2-7ac3-4111-a40d-eb8498db9153-kube-api-access-rww65\") pod \"packageserver-d55dfcdfc-kqzdm\" (UID: \"082e51f2-7ac3-4111-a40d-eb8498db9153\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.632836 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.633172 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrn4l\" (UniqueName: \"kubernetes.io/projected/7307207b-fd57-4efc-807b-7e0664873f73-kube-api-access-wrn4l\") pod \"multus-admission-controller-857f4d67dd-tx6fd\" (UID: \"7307207b-fd57-4efc-807b-7e0664873f73\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tx6fd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.649811 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:53:57 crc kubenswrapper[4787]: W0127 07:53:57.656215 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod301e3f1a_19c5_47a7_b85d_81676098f971.slice/crio-6731ad2fbbabd13c861a61f57512d12b0f1187c3b7aa0f5df8f6ddfffbda1def WatchSource:0}: Error finding container 6731ad2fbbabd13c861a61f57512d12b0f1187c3b7aa0f5df8f6ddfffbda1def: Status 404 returned error can't find the container with id 6731ad2fbbabd13c861a61f57512d12b0f1187c3b7aa0f5df8f6ddfffbda1def Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.663323 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: E0127 07:53:57.663945 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:58.163920964 +0000 UTC m=+143.816276466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.678748 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.680027 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tczr8\" (UniqueName: \"kubernetes.io/projected/8aecac31-d20f-4725-b563-b66c9288769d-kube-api-access-tczr8\") pod \"dns-default-rkzlx\" (UID: \"8aecac31-d20f-4725-b563-b66c9288769d\") " pod="openshift-dns/dns-default-rkzlx" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.696940 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gq6w\" (UniqueName: \"kubernetes.io/projected/0c6406d2-db7c-458f-9c56-a640eee1f327-kube-api-access-9gq6w\") pod \"service-ca-9c57cc56f-crvsg\" (UID: \"0c6406d2-db7c-458f-9c56-a640eee1f327\") " pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.697133 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9xlcr" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.697953 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.709630 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:53:57 crc kubenswrapper[4787]: W0127 07:53:57.711958 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf87df13_9164_4e5e_99d4_5cbd9dfe80a5.slice/crio-1253909e5ce3f2d18f3f22b50a5ae3f7abf4818cee9931b19e52db7e0faacf6f WatchSource:0}: Error finding container 1253909e5ce3f2d18f3f22b50a5ae3f7abf4818cee9931b19e52db7e0faacf6f: Status 404 returned error can't find the container with id 1253909e5ce3f2d18f3f22b50a5ae3f7abf4818cee9931b19e52db7e0faacf6f Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.721963 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tx6fd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.730604 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2rb9\" (UniqueName: \"kubernetes.io/projected/9ce2a24d-e7e3-4c93-a46a-e914e248d6ae-kube-api-access-x2rb9\") pod \"ingress-canary-znxn2\" (UID: \"9ce2a24d-e7e3-4c93-a46a-e914e248d6ae\") " pod="openshift-ingress-canary/ingress-canary-znxn2" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.739447 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.741474 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.742072 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rkzlx" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.748009 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-znxn2" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.749707 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zkx8b"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.765265 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:57 crc kubenswrapper[4787]: E0127 07:53:57.765436 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:58.26540961 +0000 UTC m=+143.917765102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.765775 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: E0127 07:53:57.766185 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:58.266171109 +0000 UTC m=+143.918526601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.784891 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.857843 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.868355 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:57 crc kubenswrapper[4787]: E0127 07:53:57.868500 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:58.368467396 +0000 UTC m=+144.020822888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.868986 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:57 crc kubenswrapper[4787]: E0127 07:53:57.872534 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:58.372505946 +0000 UTC m=+144.024861438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.891625 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.898670 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g44f6"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.900931 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4"] Jan 27 07:53:57 crc kubenswrapper[4787]: W0127 07:53:57.929310 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7ed2e1c_c815_45e4_a5a1_e9006de2e8b3.slice/crio-4d9aee465b02072995de81211f3df545d4447c81d7c2d1218c594024fa8a620b WatchSource:0}: Error finding container 4d9aee465b02072995de81211f3df545d4447c81d7c2d1218c594024fa8a620b: Status 404 returned error can't find the container with id 4d9aee465b02072995de81211f3df545d4447c81d7c2d1218c594024fa8a620b Jan 27 07:53:57 crc kubenswrapper[4787]: W0127 07:53:57.930186 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca4c081d_896c_4cc8_9656_59364376de35.slice/crio-7128b0f800ba7ef260edfd0bbe8553abf59eec44713d7f401d1f64afeed28b63 WatchSource:0}: Error finding container 7128b0f800ba7ef260edfd0bbe8553abf59eec44713d7f401d1f64afeed28b63: Status 404 returned error can't find the container with id 7128b0f800ba7ef260edfd0bbe8553abf59eec44713d7f401d1f64afeed28b63 Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.945609 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.947453 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h5d4g"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.974345 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m"] Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.974626 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:57 crc kubenswrapper[4787]: E0127 07:53:57.975172 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:58.475149995 +0000 UTC m=+144.127505487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:57 crc kubenswrapper[4787]: W0127 07:53:57.980184 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a3473c0_d321_4be8_860f_bd51210cb58f.slice/crio-55117423bdc9eccf4aba0a0c979538af0f4d46f9fbff80211848862b36fddf17 WatchSource:0}: Error finding container 55117423bdc9eccf4aba0a0c979538af0f4d46f9fbff80211848862b36fddf17: Status 404 returned error can't find the container with id 55117423bdc9eccf4aba0a0c979538af0f4d46f9fbff80211848862b36fddf17 Jan 27 07:53:57 crc kubenswrapper[4787]: I0127 07:53:57.995320 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw"] Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.019697 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" event={"ID":"daf6a704-5857-419a-bfaa-e2af3a05d386","Type":"ContainerStarted","Data":"efc811c1725740bff5cb730eaca2c2912ccabab6aa80552f77f629591a8efadb"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.023900 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mnzjr"] Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.081220 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:58 crc kubenswrapper[4787]: E0127 07:53:58.082630 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:58.582555893 +0000 UTC m=+144.234911385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.086524 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" event={"ID":"0259775f-1fef-486a-bc17-4638e38ed83f","Type":"ContainerStarted","Data":"2b47bd63962bfe1d1f2626a0146931ca7d71c50c1e9a2b870fbb6f7f3fafcb6f"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.086667 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" event={"ID":"0259775f-1fef-486a-bc17-4638e38ed83f","Type":"ContainerStarted","Data":"41afcd3d4f1c7a6dc4b744f46a303d0903dfaf2cc98bd0b97df786bbe6ad0ce1"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.086752 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.089959 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h4cr4"] Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.090237 4787 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dlz8t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.090321 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" podUID="0259775f-1fef-486a-bc17-4638e38ed83f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 07:53:58 crc kubenswrapper[4787]: W0127 07:53:58.093554 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e88d48e_6302_4a84_90a7_69446be90e4a.slice/crio-080593f490256d1b9111cdbbbfb3bf578e73f07bd416e1297c741853a022c9d0 WatchSource:0}: Error finding container 080593f490256d1b9111cdbbbfb3bf578e73f07bd416e1297c741853a022c9d0: Status 404 returned error can't find the container with id 080593f490256d1b9111cdbbbfb3bf578e73f07bd416e1297c741853a022c9d0 Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.098398 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qptnb" event={"ID":"b070600e-8a6f-4bb9-a1c2-e763f55d90eb","Type":"ContainerStarted","Data":"29d34fb56a5ea9d4aa885d1db94f1f656d15f31f013518446e7a295dc5daddbd"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.101034 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xwx4w" event={"ID":"301e3f1a-19c5-47a7-b85d-81676098f971","Type":"ContainerStarted","Data":"6731ad2fbbabd13c861a61f57512d12b0f1187c3b7aa0f5df8f6ddfffbda1def"} Jan 27 07:53:58 
crc kubenswrapper[4787]: I0127 07:53:58.102999 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" event={"ID":"a1672707-9776-4274-9bdd-4f1dabf83038","Type":"ContainerStarted","Data":"a241f6eac7b737e993b87660c239179a420906d8f785dac6b9fd6e0bdd6c3bee"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.103030 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" event={"ID":"a1672707-9776-4274-9bdd-4f1dabf83038","Type":"ContainerStarted","Data":"3eee0f59d1eca1b9d315c94eeaac740410d9f704d52695edb13bb727c96e9822"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.107821 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ctv89" event={"ID":"af87df13-9164-4e5e-99d4-5cbd9dfe80a5","Type":"ContainerStarted","Data":"1253909e5ce3f2d18f3f22b50a5ae3f7abf4818cee9931b19e52db7e0faacf6f"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.111589 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" event={"ID":"ba0745e4-7610-44e2-9a65-cd3875393d64","Type":"ContainerStarted","Data":"0ce20a753e35b3f0cbb05a1acb9083e2b9b1eb4f0f0e0922aab81e6d4e6a231b"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.112372 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" event={"ID":"ca4c081d-896c-4cc8-9656-59364376de35","Type":"ContainerStarted","Data":"7128b0f800ba7ef260edfd0bbe8553abf59eec44713d7f401d1f64afeed28b63"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.130701 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z"] Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.174780 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" event={"ID":"832859dc-03b3-4b2b-9a40-75372ebb38d9","Type":"ContainerStarted","Data":"174043ae4eb3f27b2a02f8f0d458a2ba711820a4ca7d210e32ef6ae43cdd1bc9"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.174851 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" event={"ID":"832859dc-03b3-4b2b-9a40-75372ebb38d9","Type":"ContainerStarted","Data":"7f42185c5aef7dce4e2e399f8d91331294ff845005ceee90fd08fcb40774b6f0"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.185850 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:58 crc kubenswrapper[4787]: E0127 07:53:58.188027 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:58.688002046 +0000 UTC m=+144.340357538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.191618 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" event={"ID":"287286a8-80b2-4c95-948d-a096153d8e51","Type":"ContainerStarted","Data":"8c17a3ca80b645ddcd4da67a398d34fa5868490459331646e4490ab4755aca07"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.194416 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" event={"ID":"fcc64739-2fd2-4413-a19e-0bc14dd883d6","Type":"ContainerStarted","Data":"0586f60c3abf6a05bb8ec725bce60998408f815b2354ed56f05b7be6dbc58bdc"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.195887 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" event={"ID":"c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3","Type":"ContainerStarted","Data":"4d9aee465b02072995de81211f3df545d4447c81d7c2d1218c594024fa8a620b"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.199979 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" event={"ID":"3ef9e285-88a8-499d-8fb8-e4c882336e68","Type":"ContainerStarted","Data":"211ad6fc0e133b1fdefabe36246157cc21a82111703439f2ce1c112f11c8892e"} Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.254619 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t6xzm"] Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.287874 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:58 crc kubenswrapper[4787]: E0127 07:53:58.289839 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:58.789821335 +0000 UTC m=+144.442176827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:58 crc kubenswrapper[4787]: W0127 07:53:58.345801 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f4a2300_d512_490d_a876_6ad03f0c2f31.slice/crio-f447b97f79c24f1fa86dc633d9ade1d3765dd9b8fca0314cad47652ceecd0c98 WatchSource:0}: Error finding container f447b97f79c24f1fa86dc633d9ade1d3765dd9b8fca0314cad47652ceecd0c98: Status 404 returned error can't find the container with id f447b97f79c24f1fa86dc633d9ade1d3765dd9b8fca0314cad47652ceecd0c98 Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.394938 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:58 crc kubenswrapper[4787]: E0127 07:53:58.396736 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:58.896712383 +0000 UTC m=+144.549067875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.503809 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:58 crc kubenswrapper[4787]: E0127 07:53:58.505648 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:59.005634736 +0000 UTC m=+144.657990228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.532052 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" podStartSLOduration=119.532027528 podStartE2EDuration="1m59.532027528s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:58.530706939 +0000 UTC m=+144.183062431" watchObservedRunningTime="2026-01-27 07:53:58.532027528 +0000 UTC m=+144.184383030" Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.605140 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:58 crc kubenswrapper[4787]: E0127 07:53:58.614677 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:59.114635802 +0000 UTC m=+144.766991484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.679927 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc"] Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.685209 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-crvsg"] Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.711980 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:58 crc kubenswrapper[4787]: E0127 07:53:58.712493 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:59.212469703 +0000 UTC m=+144.864825385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.818403 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:58 crc kubenswrapper[4787]: E0127 07:53:58.820411 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:59.320371358 +0000 UTC m=+144.972726850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.820974 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:58 crc kubenswrapper[4787]: E0127 07:53:58.822308 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:59.322279139 +0000 UTC m=+144.974634631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.917308 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv"] Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.923015 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:58 crc kubenswrapper[4787]: E0127 07:53:58.923475 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:59.423446804 +0000 UTC m=+145.075802296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.930442 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p"] Jan 27 07:53:58 crc kubenswrapper[4787]: I0127 07:53:58.934416 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr"] Jan 27 07:53:58 crc kubenswrapper[4787]: W0127 07:53:58.944966 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod770c6bc0_a6c4_42b9_b21e_d1c36ed7b8bf.slice/crio-d296ddc6bf8d7b5468444ef4917e9aa0c22406552d50e8139100bebefd2fb14f WatchSource:0}: Error finding container d296ddc6bf8d7b5468444ef4917e9aa0c22406552d50e8139100bebefd2fb14f: Status 404 returned error can't find the container with id d296ddc6bf8d7b5468444ef4917e9aa0c22406552d50e8139100bebefd2fb14f Jan 27 07:53:58 crc kubenswrapper[4787]: W0127 07:53:58.946110 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c6406d2_db7c_458f_9c56_a640eee1f327.slice/crio-0dc71c20f37b8087257a9fe017b8da119943fdfe4921d6702b4b2d71f52e1025 WatchSource:0}: Error finding container 0dc71c20f37b8087257a9fe017b8da119943fdfe4921d6702b4b2d71f52e1025: Status 404 returned error can't find the container with id 0dc71c20f37b8087257a9fe017b8da119943fdfe4921d6702b4b2d71f52e1025 Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.024416 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:59 crc kubenswrapper[4787]: E0127 07:53:59.024879 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:59.524860458 +0000 UTC m=+145.177215950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.131905 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:59 crc kubenswrapper[4787]: E0127 07:53:59.132668 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:59.632641169 +0000 UTC m=+145.284996661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.157971 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-znxn2"] Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.205305 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" event={"ID":"152af819-6586-4345-b2ef-cb8ad845a6b1","Type":"ContainerStarted","Data":"dd1d962cbba22622cc8f00742deb5445d3f8c1e1a6d321d913c3f39997ffb3b5"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.212652 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" event={"ID":"4c622447-70d1-4b27-b09c-6fa6402f632c","Type":"ContainerStarted","Data":"1b5bf904f5f1e8f79e8efc5a22b9f48d32b5fc1567f843d15b652083cc0e3ea5"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.214400 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" event={"ID":"2e88d48e-6302-4a84-90a7-69446be90e4a","Type":"ContainerStarted","Data":"080593f490256d1b9111cdbbbfb3bf578e73f07bd416e1297c741853a022c9d0"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.215708 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" event={"ID":"0c6406d2-db7c-458f-9c56-a640eee1f327","Type":"ContainerStarted","Data":"0dc71c20f37b8087257a9fe017b8da119943fdfe4921d6702b4b2d71f52e1025"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.217668 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" event={"ID":"52652793-dcbd-4568-a32e-b727818c61a8","Type":"ContainerStarted","Data":"4adb5d9bef3b963fd885398b5383883b4aaac905cc20d93dcd9e98bb57cfe2cc"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.221556 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qptnb" event={"ID":"b070600e-8a6f-4bb9-a1c2-e763f55d90eb","Type":"ContainerStarted","Data":"310ccc96bfc18eaae7b0abfc190b26a49989ec96f4c419c7eab3252a1004a577"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.224035 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" event={"ID":"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5","Type":"ContainerStarted","Data":"76cab52707c7779aa6ca2f62da481161aabf68d198a8565ceee652ad4d69cfda"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.227275 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" event={"ID":"0a3473c0-d321-4be8-860f-bd51210cb58f","Type":"ContainerStarted","Data":"55117423bdc9eccf4aba0a0c979538af0f4d46f9fbff80211848862b36fddf17"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.235255 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:59 crc kubenswrapper[4787]: E0127 07:53:59.237416 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:53:59.737402747 +0000 UTC m=+145.389758239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.240068 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" event={"ID":"770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf","Type":"ContainerStarted","Data":"d296ddc6bf8d7b5468444ef4917e9aa0c22406552d50e8139100bebefd2fb14f"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.247376 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg"] Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.307502 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7"] Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.311144 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl"] Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.317951 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4" event={"ID":"7c9507a4-925b-418e-b824-f338cd69a66e","Type":"ContainerStarted","Data":"06fffec946eb75065833bf8dcef868c04446ce7888e95fb235bcfbf091cabc99"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.322864 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc"] Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.331870 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9xlcr" event={"ID":"0260c856-7cbc-4fdd-887e-5b2403c05e04","Type":"ContainerStarted","Data":"6e504fe6448ffaf4082c3f9e3c16238131e32ca5463065c645aaaac7c9bda8a8"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.336985 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:59 crc kubenswrapper[4787]: E0127 07:53:59.337707 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:53:59.837686858 +0000 UTC m=+145.490042350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.344332 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" event={"ID":"3b9018b2-8e63-4c80-8c71-cc9fa7ddb853","Type":"ContainerStarted","Data":"e5bf6924b44ba6b2dd6b844e67c0f5dd6bfcc887f40629cb15547e94d18588c8"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.370782 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" event={"ID":"fcc64739-2fd2-4413-a19e-0bc14dd883d6","Type":"ContainerStarted","Data":"2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.371702 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.394033 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rkzlx"] Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.394460 4787 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wdppr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.394545 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" podUID="fcc64739-2fd2-4413-a19e-0bc14dd883d6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.422383 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" event={"ID":"8d0e8826-188e-474f-9d5b-848886f95d1d","Type":"ContainerStarted","Data":"4a4dbca16def5f4b6df5963753724a39ea3f46877dae1dfff809374832b9f8ee"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.462766 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:59 crc kubenswrapper[4787]: E0127 07:53:59.463390 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 07:53:59.963375486 +0000 UTC m=+145.615730978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.466937 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xwx4w" event={"ID":"301e3f1a-19c5-47a7-b85d-81676098f971","Type":"ContainerStarted","Data":"a84ca8c27c52e9d15e05d7ca98315d7c9e0320b3daccf98569865c02ff90e75e"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.472692 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tx6fd"] Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.475334 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4cr4" event={"ID":"be571415-6ab0-4b8f-a971-3225465af110","Type":"ContainerStarted","Data":"e84cb2bc08f882da1b1db7f6845964928adbf9c01abbc37e3a8c94350e9ddbde"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.479592 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm"] Jan 27 07:53:59 crc kubenswrapper[4787]: W0127 07:53:59.482064 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7307207b_fd57_4efc_807b_7e0664873f73.slice/crio-f0e097bbddf99a5cd749191fafe89c657d9a183c594c798eb00ebbd8cd926a52 WatchSource:0}: Error finding container f0e097bbddf99a5cd749191fafe89c657d9a183c594c798eb00ebbd8cd926a52: Status 404 returned error can't find the container with id f0e097bbddf99a5cd749191fafe89c657d9a183c594c798eb00ebbd8cd926a52 Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.487720 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mnrsb" podStartSLOduration=121.487699401 podStartE2EDuration="2m1.487699401s" podCreationTimestamp="2026-01-27 07:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:59.413600934 +0000 UTC m=+145.065956436" watchObservedRunningTime="2026-01-27 07:53:59.487699401 +0000 UTC m=+145.140054893" Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.491458 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mwnxd"] Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.495607 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qptnb" podStartSLOduration=120.495579454 podStartE2EDuration="2m0.495579454s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:59.462241203 +0000 UTC m=+145.114596715" watchObservedRunningTime="2026-01-27 07:53:59.495579454 +0000 UTC m=+145.147934956" Jan 27 07:53:59 crc kubenswrapper[4787]: W0127 
07:53:59.507724 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8aecac31_d20f_4725_b563_b66c9288769d.slice/crio-fcee71dd80f35d712ce58fb88f374f8864ba0291f70a83880cf45673e0387f68 WatchSource:0}: Error finding container fcee71dd80f35d712ce58fb88f374f8864ba0291f70a83880cf45673e0387f68: Status 404 returned error can't find the container with id fcee71dd80f35d712ce58fb88f374f8864ba0291f70a83880cf45673e0387f68 Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.508386 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw" event={"ID":"c71976d9-f7ec-4258-8bf5-08a526607fd9","Type":"ContainerStarted","Data":"10240c10423ac37b2e46f6cea0da36c5c4a81e6b1447823d6ef1b050af4ab495"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.508426 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw" event={"ID":"c71976d9-f7ec-4258-8bf5-08a526607fd9","Type":"ContainerStarted","Data":"51626a0e171c125a2efc326d5536f02b3cb13c75048e102d07ec3d28dd1d93be"} Jan 27 07:53:59 crc kubenswrapper[4787]: W0127 07:53:59.526443 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod082e51f2_7ac3_4111_a40d_eb8498db9153.slice/crio-e34965f2d6255a1e1c7d97589f31370ca5d0fbe322d30e475ba8e52245af1f1f WatchSource:0}: Error finding container e34965f2d6255a1e1c7d97589f31370ca5d0fbe322d30e475ba8e52245af1f1f: Status 404 returned error can't find the container with id e34965f2d6255a1e1c7d97589f31370ca5d0fbe322d30e475ba8e52245af1f1f Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.526681 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" event={"ID":"3ef9e285-88a8-499d-8fb8-e4c882336e68","Type":"ContainerStarted","Data":"66a65b0119f8122a1c534ff54390fff9ee7f15a69cfd1201d2b9050f003e051d"} Jan 27 07:53:59 crc kubenswrapper[4787]: W0127 07:53:59.537473 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf72f30c1_aeab_47d1_b353_5ea33af8eb6b.slice/crio-c39bdf5bbcb945f4a4464c0c5e574578ec8fec6904d0e3147e0c0840df9c02b1 WatchSource:0}: Error finding container c39bdf5bbcb945f4a4464c0c5e574578ec8fec6904d0e3147e0c0840df9c02b1: Status 404 returned error can't find the container with id c39bdf5bbcb945f4a4464c0c5e574578ec8fec6904d0e3147e0c0840df9c02b1 Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.541402 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jzdds" event={"ID":"5f4a2300-d512-490d-a876-6ad03f0c2f31","Type":"ContainerStarted","Data":"f447b97f79c24f1fa86dc633d9ade1d3765dd9b8fca0314cad47652ceecd0c98"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.545037 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" event={"ID":"7727572b-c246-4b22-b3eb-14275d3d25ee","Type":"ContainerStarted","Data":"ec8b003891116f6dd3ec5d857235c290937ddaa356d64a376a5c322f76b6e81f"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.549847 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" 
event={"ID":"a1672707-9776-4274-9bdd-4f1dabf83038","Type":"ContainerDied","Data":"a241f6eac7b737e993b87660c239179a420906d8f785dac6b9fd6e0bdd6c3bee"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.549805 4787 generic.go:334] "Generic (PLEG): container finished" podID="a1672707-9776-4274-9bdd-4f1dabf83038" containerID="a241f6eac7b737e993b87660c239179a420906d8f785dac6b9fd6e0bdd6c3bee" exitCode=0 Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.550161 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.554531 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t6xzm" event={"ID":"ad35acfc-9b56-41e7-87bd-b43f5c006dd5","Type":"ContainerStarted","Data":"bc310fbc088a63b3a67f1c14ce3f1ec9954daeae6116c3ccad21af3b97a8d451"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.558293 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" event={"ID":"f063d15e-fe05-46d3-8304-72da67594ea6","Type":"ContainerStarted","Data":"5366bb02b1eb923307814fe987825369e24038367d69657c58f848bd86f88189"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.564815 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" event={"ID":"daf6a704-5857-419a-bfaa-e2af3a05d386","Type":"ContainerStarted","Data":"e4729d098a219b930fcb6407bf849e979362b747266527d8c556883588f57a0a"} Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.567478 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:59 crc kubenswrapper[4787]: E0127 07:53:59.569026 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.069003017 +0000 UTC m=+145.721358509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.569100 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" podStartSLOduration=120.56908432 podStartE2EDuration="2m0.56908432s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:59.534219772 +0000 UTC m=+145.186575284" watchObservedRunningTime="2026-01-27 07:53:59.56908432 +0000 UTC m=+145.221439812" Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.583937 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.618535 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-xwx4w" podStartSLOduration=120.618513249 podStartE2EDuration="2m0.618513249s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:59.615705974 +0000 UTC m=+145.268061476" watchObservedRunningTime="2026-01-27 07:53:59.618513249 +0000 UTC m=+145.270868751" Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.628251 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wdvfw" podStartSLOduration=120.62822947 podStartE2EDuration="2m0.62822947s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:59.570524533 +0000 UTC m=+145.222880025" watchObservedRunningTime="2026-01-27 07:53:59.62822947 +0000 UTC m=+145.280584962" Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.660573 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-qtkc8" podStartSLOduration=120.660527642 podStartE2EDuration="2m0.660527642s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:59.660389667 +0000 UTC m=+145.312745159" watchObservedRunningTime="2026-01-27 07:53:59.660527642 +0000 UTC m=+145.312883134" Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.669862 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:59 crc kubenswrapper[4787]: E0127 
07:53:59.672525 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.172507067 +0000 UTC m=+145.824862559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.696754 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" podStartSLOduration=121.69673228 podStartE2EDuration="2m1.69673228s" podCreationTimestamp="2026-01-27 07:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:59.69488349 +0000 UTC m=+145.347238982" watchObservedRunningTime="2026-01-27 07:53:59.69673228 +0000 UTC m=+145.349087772" Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.771483 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:59 crc kubenswrapper[4787]: E0127 07:53:59.773038 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.273012088 +0000 UTC m=+145.925367580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.776692 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:59 crc kubenswrapper[4787]: E0127 07:53:59.777238 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.277220534 +0000 UTC m=+145.929576026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.778932 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-84k56" podStartSLOduration=121.778915637 podStartE2EDuration="2m1.778915637s" podCreationTimestamp="2026-01-27 07:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:59.732006392 +0000 UTC m=+145.384361884" watchObservedRunningTime="2026-01-27 07:53:59.778915637 +0000 UTC m=+145.431271129" Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.813306 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" podStartSLOduration=120.813279946 podStartE2EDuration="2m0.813279946s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:53:59.784989694 +0000 UTC m=+145.437345206" watchObservedRunningTime="2026-01-27 07:53:59.813279946 +0000 UTC m=+145.465635448" Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.877764 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:53:59 crc kubenswrapper[4787]: E0127 07:53:59.878135 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.378115879 +0000 UTC m=+146.030471371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:53:59 crc kubenswrapper[4787]: I0127 07:53:59.981387 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:53:59 crc kubenswrapper[4787]: E0127 07:53:59.981898 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.48188167 +0000 UTC m=+146.134237162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.082725 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.082988 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.582952782 +0000 UTC m=+146.235308264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.083580 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.084020 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.58400196 +0000 UTC m=+146.236357452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.114436 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.114532 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.151607 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.186223 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.186401 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.686358599 +0000 UTC m=+146.338714091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.186672 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.188540 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.68851888 +0000 UTC m=+146.340874552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.287887 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.288212 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.788170408 +0000 UTC m=+146.440525900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.288457 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.288984 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.788967398 +0000 UTC m=+146.441322890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.389716 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.389897 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.889870263 +0000 UTC m=+146.542225755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.390464 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.391000 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.890980844 +0000 UTC m=+146.543336336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.492102 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.492324 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.992303524 +0000 UTC m=+146.644659016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.492815 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.493619 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:00.993580802 +0000 UTC m=+146.645936464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.594081 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.594275 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:01.094245648 +0000 UTC m=+146.746601140 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.594439 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.594873 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:01.09485614 +0000 UTC m=+146.747211632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.604767 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" event={"ID":"2e88d48e-6302-4a84-90a7-69446be90e4a","Type":"ContainerStarted","Data":"380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.606312 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.607922 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" event={"ID":"082e51f2-7ac3-4111-a40d-eb8498db9153","Type":"ContainerStarted","Data":"e34965f2d6255a1e1c7d97589f31370ca5d0fbe322d30e475ba8e52245af1f1f"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.618781 4787 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-h5d4g container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.618835 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" podUID="2e88d48e-6302-4a84-90a7-69446be90e4a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.633803 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" 
event={"ID":"c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3","Type":"ContainerStarted","Data":"318390bc28506e850375d5d6709217fc4fe4d18dd8f418be899c4b70810854bf"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.658761 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" event={"ID":"52652793-dcbd-4568-a32e-b727818c61a8","Type":"ContainerStarted","Data":"859f29a12886d96f6a5ad7a2e2b0d3affaf1e1e7dc80936a6ac1f321e4160902"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.669921 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" event={"ID":"c1a7c228-ae7f-44c7-98df-78bc3949c528","Type":"ContainerStarted","Data":"3d53cdec71d4e98b1fcd5b22da98aa602f9db421f18576ea91c5b9909c09ca0b"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.678050 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" podStartSLOduration=121.678023556 podStartE2EDuration="2m1.678023556s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:00.655148234 +0000 UTC m=+146.307503746" watchObservedRunningTime="2026-01-27 07:54:00.678023556 +0000 UTC m=+146.330379048" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.695953 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.697575 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:01.197511811 +0000 UTC m=+146.849867373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.726446 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ctv89" event={"ID":"af87df13-9164-4e5e-99d4-5cbd9dfe80a5","Type":"ContainerStarted","Data":"535dd7c269fc6359bf8e3595d1b63afe2342457c4c43110041049e99fec62a40"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.727234 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.738010 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-ctv89 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.738237 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ctv89" podUID="af87df13-9164-4e5e-99d4-5cbd9dfe80a5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.763866 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d9n62" podStartSLOduration=121.763839639 podStartE2EDuration="2m1.763839639s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:00.68109813 +0000 UTC m=+146.333453622" watchObservedRunningTime="2026-01-27 07:54:00.763839639 +0000 UTC m=+146.416195131" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.778361 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" event={"ID":"a1672707-9776-4274-9bdd-4f1dabf83038","Type":"ContainerStarted","Data":"1f79ff8a49a9e9948d2591fcf86582b17234487ec76130430371f8fb8eeb10ff"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.785887 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4" event={"ID":"7c9507a4-925b-418e-b824-f338cd69a66e","Type":"ContainerStarted","Data":"5f8513655a98fd8edfda812d7cb333015d5fac538cc236c7d0655eba4136e1b0"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.792741 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" event={"ID":"f72f30c1-aeab-47d1-b353-5ea33af8eb6b","Type":"ContainerStarted","Data":"c39bdf5bbcb945f4a4464c0c5e574578ec8fec6904d0e3147e0c0840df9c02b1"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.794431 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console-operator/console-operator-58897d9998-ctv89" podStartSLOduration=121.794414057 podStartE2EDuration="2m1.794414057s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:00.762821221 +0000 UTC m=+146.415176723" watchObservedRunningTime="2026-01-27 07:54:00.794414057 +0000 UTC m=+146.446769549" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.795664 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" podStartSLOduration=121.795657263 podStartE2EDuration="2m1.795657263s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:00.794579683 +0000 UTC m=+146.446935185" watchObservedRunningTime="2026-01-27 07:54:00.795657263 +0000 UTC m=+146.448012755" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.800034 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.810441 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:01.310401981 +0000 UTC m=+146.962757463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.849539 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" event={"ID":"287286a8-80b2-4c95-948d-a096153d8e51","Type":"ContainerStarted","Data":"a52b1df0652ff9b38cd15dbf0bd00fd8fd9e3424531fd60b0f809e68cfd4499a"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.880331 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" event={"ID":"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5","Type":"ContainerStarted","Data":"31367b02d14f1a4db40e7164f0b2be9cfe37b9fc8e6babc123bc1322f1a52842"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.916037 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:00 crc kubenswrapper[4787]: E0127 07:54:00.917423 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:01.417404584 +0000 UTC m=+147.069760076 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.923988 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jzdds" event={"ID":"5f4a2300-d512-490d-a876-6ad03f0c2f31","Type":"ContainerStarted","Data":"f177590393476ec974ed7a7d8600fdcd3567bffcaa8920db4a6320af2a86a07e"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.925975 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" podStartSLOduration=121.925958762 podStartE2EDuration="2m1.925958762s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:00.924411744 +0000 UTC m=+146.576767236" watchObservedRunningTime="2026-01-27 07:54:00.925958762 +0000 UTC m=+146.578314254" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.927469 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfh8x" podStartSLOduration=122.927459428 podStartE2EDuration="2m2.927459428s" podCreationTimestamp="2026-01-27 07:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:00.879852096 +0000 UTC m=+146.532207588" watchObservedRunningTime="2026-01-27 07:54:00.927459428 +0000 UTC m=+146.579814920" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.936046 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-znxn2" event={"ID":"9ce2a24d-e7e3-4c93-a46a-e914e248d6ae","Type":"ContainerStarted","Data":"a48b707d16db1e654abc9e879479b6ada09c2e0a4a4fe8b3fef680c656651ce9"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.948345 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" event={"ID":"cd10b89a-565a-4ae6-b7dd-4bac183ece8c","Type":"ContainerStarted","Data":"9327b8bbc5ad1982bdfd7b430fa46b49d3679836617e29dcb3e639a47d21ece5"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.966233 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jzdds" podStartSLOduration=121.966197638 podStartE2EDuration="2m1.966197638s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:00.958370378 +0000 UTC m=+146.610725870" watchObservedRunningTime="2026-01-27 07:54:00.966197638 +0000 UTC m=+146.618553140" Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.985465 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" 
event={"ID":"8d0e8826-188e-474f-9d5b-848886f95d1d","Type":"ContainerStarted","Data":"22fdd5365b93b4f5204d87d1a4df2fd4735d1486142fdb0ee41ec9aa76c418be"} Jan 27 07:54:00 crc kubenswrapper[4787]: I0127 07:54:00.985886 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-znxn2" podStartSLOduration=6.98585293 podStartE2EDuration="6.98585293s" podCreationTimestamp="2026-01-27 07:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:00.985217407 +0000 UTC m=+146.637572909" watchObservedRunningTime="2026-01-27 07:54:00.98585293 +0000 UTC m=+146.638208422" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.001812 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t6xzm" event={"ID":"ad35acfc-9b56-41e7-87bd-b43f5c006dd5","Type":"ContainerStarted","Data":"cd69ef4b0cf298876502ab6c4faf5bb19032d85098576b40e80ece77086b0042"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.015833 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" event={"ID":"66eeb178-2611-4361-962c-a399b1f243b0","Type":"ContainerStarted","Data":"d8dbb1a88704c4745c58d07ffc23c2222317fe5d4738aa3a87fd2d0f87d7b995"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.018625 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.019376 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:01 crc kubenswrapper[4787]: E0127 07:54:01.022784 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:01.522766214 +0000 UTC m=+147.175121706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.025339 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j6mnr" podStartSLOduration=122.025302719 podStartE2EDuration="2m2.025302719s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:01.016398937 +0000 UTC m=+146.668754429" watchObservedRunningTime="2026-01-27 07:54:01.025302719 +0000 UTC m=+146.677658211" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.030957 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cgsgg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.031028 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" podUID="66eeb178-2611-4361-962c-a399b1f243b0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.045780 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" event={"ID":"4c622447-70d1-4b27-b09c-6fa6402f632c","Type":"ContainerStarted","Data":"c23a04a6732dc9cb95ffef73966fcc9a98e38e2ce2c0eb6cd3ab353422b67748"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.046477 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.084101 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" podStartSLOduration=122.084068346 podStartE2EDuration="2m2.084068346s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:01.069880638 +0000 UTC m=+146.722236130" watchObservedRunningTime="2026-01-27 07:54:01.084068346 +0000 UTC m=+146.736423838" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.102286 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" podStartSLOduration=122.102256812 podStartE2EDuration="2m2.102256812s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:01.095992379 +0000 UTC m=+146.748347881" 
watchObservedRunningTime="2026-01-27 07:54:01.102256812 +0000 UTC m=+146.754612304" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.104932 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9xlcr" event={"ID":"0260c856-7cbc-4fdd-887e-5b2403c05e04","Type":"ContainerStarted","Data":"d1e12cb428e4881ce482a6b3479d196778864959f4a6a6c9df83d10d4a90cbf0"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.106045 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" event={"ID":"3b9018b2-8e63-4c80-8c71-cc9fa7ddb853","Type":"ContainerStarted","Data":"31bee4a2b62aeaca94b08e5898d5f668df4b7ec70d2def20eeb97ebd083f3ec3"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.122032 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.123318 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rkzlx" event={"ID":"8aecac31-d20f-4725-b563-b66c9288769d","Type":"ContainerStarted","Data":"fcee71dd80f35d712ce58fb88f374f8864ba0291f70a83880cf45673e0387f68"} Jan 27 07:54:01 crc kubenswrapper[4787]: E0127 07:54:01.124782 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:01.624690727 +0000 UTC m=+147.277046229 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.127775 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" event={"ID":"f063d15e-fe05-46d3-8304-72da67594ea6","Type":"ContainerStarted","Data":"5a4e6084c8174d166cda259d0d40f41eecd29f190246bde8d4d400592447021d"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.130949 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9xlcr" podStartSLOduration=7.130921599 podStartE2EDuration="7.130921599s" podCreationTimestamp="2026-01-27 07:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:01.13013314 +0000 UTC m=+146.782488632" watchObservedRunningTime="2026-01-27 07:54:01.130921599 +0000 UTC m=+146.783277091" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.162534 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgw8m" podStartSLOduration=122.162504414 podStartE2EDuration="2m2.162504414s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:01.159083236 +0000 UTC m=+146.811438728" watchObservedRunningTime="2026-01-27 07:54:01.162504414 +0000 UTC m=+146.814859916" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.182010 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" event={"ID":"ca4c081d-896c-4cc8-9656-59364376de35","Type":"ContainerStarted","Data":"18d9ef1e7d3e588a5d939868f7c0a651f90760ac04dbbd15cc83a8ed490076a9"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.183445 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.209507 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-mnzjr" podStartSLOduration=122.209466881 podStartE2EDuration="2m2.209466881s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:01.19463122 +0000 UTC m=+146.846986722" watchObservedRunningTime="2026-01-27 07:54:01.209466881 +0000 UTC m=+146.861822573" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.210173 4787 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-g44f6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.210358 4787 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" podUID="ca4c081d-896c-4cc8-9656-59364376de35" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.224134 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:01 crc kubenswrapper[4787]: E0127 07:54:01.226491 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:01.726473674 +0000 UTC m=+147.378829156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.227517 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" event={"ID":"152af819-6586-4345-b2ef-cb8ad845a6b1","Type":"ContainerStarted","Data":"4bba20fc10f4125f9489dd4dcdc6c569bd57f97297dffe6c9dba0e5f759c10ea"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.234016 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" podStartSLOduration=122.233988014 podStartE2EDuration="2m2.233988014s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:01.232190967 +0000 UTC m=+146.884546459" watchObservedRunningTime="2026-01-27 07:54:01.233988014 +0000 UTC m=+146.886343516" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.255152 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tx6fd" event={"ID":"7307207b-fd57-4efc-807b-7e0664873f73","Type":"ContainerStarted","Data":"f0e097bbddf99a5cd749191fafe89c657d9a183c594c798eb00ebbd8cd926a52"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.259989 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" podStartSLOduration=122.259970771 podStartE2EDuration="2m2.259970771s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:01.258249857 +0000 UTC m=+146.910605369" watchObservedRunningTime="2026-01-27 07:54:01.259970771 +0000 UTC m=+146.912326263" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.279739 4787 
generic.go:334] "Generic (PLEG): container finished" podID="ba0745e4-7610-44e2-9a65-cd3875393d64" containerID="31ba9cc692eb702ff1a9bfd4d9a7e97905de373f690545545a78b36468be4e0f" exitCode=0 Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.279859 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" event={"ID":"ba0745e4-7610-44e2-9a65-cd3875393d64","Type":"ContainerDied","Data":"31ba9cc692eb702ff1a9bfd4d9a7e97905de373f690545545a78b36468be4e0f"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.311389 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" event={"ID":"320d8560-54f7-4505-aa98-5beb6c45505e","Type":"ContainerStarted","Data":"45f361e64b8125955ab791eeacbf8e8dc3d2aa01e1f798b908cbd3a73c38ec83"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.312179 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.319746 4787 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vm6dc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.320082 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" podUID="320d8560-54f7-4505-aa98-5beb6c45505e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.322788 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" event={"ID":"0a3473c0-d321-4be8-860f-bd51210cb58f","Type":"ContainerStarted","Data":"07e49929be2877ba22a7f7942abf3e5f479a630fbb91b44620c30be0145e9051"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.326909 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:01 crc kubenswrapper[4787]: E0127 07:54:01.328692 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:01.828671367 +0000 UTC m=+147.481026859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.376368 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" podStartSLOduration=122.376349891 podStartE2EDuration="2m2.376349891s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:01.361303842 +0000 UTC m=+147.013659344" watchObservedRunningTime="2026-01-27 07:54:01.376349891 +0000 UTC m=+147.028705383" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.383070 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4cr4" event={"ID":"be571415-6ab0-4b8f-a971-3225465af110","Type":"ContainerStarted","Data":"9b0831767f55750a9e2555383b80711ffdb05dab1ed7dbc48f8a84a58f3ad26e"} Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.385867 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xwx4w" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.400457 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p29lm" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.400758 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.428737 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-xwx4w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.428819 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xwx4w" podUID="301e3f1a-19c5-47a7-b85d-81676098f971" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.428842 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:01 crc kubenswrapper[4787]: E0127 07:54:01.430317 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:01.930299199 +0000 UTC m=+147.582654681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.531270 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:01 crc kubenswrapper[4787]: E0127 07:54:01.535009 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.034950604 +0000 UTC m=+147.687306096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.583772 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ln7rp" podStartSLOduration=122.58374808 podStartE2EDuration="2m2.58374808s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:01.429660945 +0000 UTC m=+147.082016457" watchObservedRunningTime="2026-01-27 07:54:01.58374808 +0000 UTC m=+147.236103572" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.635353 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4cr4" podStartSLOduration=122.635318998 podStartE2EDuration="2m2.635318998s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:01.634342602 +0000 UTC m=+147.286698104" watchObservedRunningTime="2026-01-27 07:54:01.635318998 +0000 UTC m=+147.287674510" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.635390 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:01 crc kubenswrapper[4787]: E0127 07:54:01.635730 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.135716904 +0000 UTC m=+147.788072396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.712819 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.742143 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:01 crc kubenswrapper[4787]: E0127 07:54:01.742513 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.242495077 +0000 UTC m=+147.894850569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.742764 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:01 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:01 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:01 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.742802 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.846853 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:01 crc kubenswrapper[4787]: E0127 07:54:01.847290 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.347274416 +0000 UTC m=+147.999629908 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:01 crc kubenswrapper[4787]: I0127 07:54:01.959588 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:01 crc kubenswrapper[4787]: E0127 07:54:01.960423 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.460400126 +0000 UTC m=+148.112755608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.061928 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.062419 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.56239163 +0000 UTC m=+148.214747302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.163132 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.163376 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.663334766 +0000 UTC m=+148.315690268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.163448 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.163931 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.663910889 +0000 UTC m=+148.316266541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.265623 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.265860 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.765819591 +0000 UTC m=+148.418175083 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.266058 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.267081 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.767071498 +0000 UTC m=+148.419426990 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.276769 4787 csr.go:261] certificate signing request csr-7nwqm is approved, waiting to be issued Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.283069 4787 csr.go:257] certificate signing request csr-7nwqm is issued Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.367757 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.367987 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.867937921 +0000 UTC m=+148.520293403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.368316 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.368879 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.868847524 +0000 UTC m=+148.521203016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.392825 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rkzlx" event={"ID":"8aecac31-d20f-4725-b563-b66c9288769d","Type":"ContainerStarted","Data":"6a6d83ddf1a9d8424c8026068dfc1e75d7527f2e2089ebb81cda023641b16d16"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.392902 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rkzlx" event={"ID":"8aecac31-d20f-4725-b563-b66c9288769d","Type":"ContainerStarted","Data":"ab402b294004f3bec36b053aea58449461b62a4671b44090382136dfa2271fdb"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.393028 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rkzlx" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.394601 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-crvsg" event={"ID":"0c6406d2-db7c-458f-9c56-a640eee1f327","Type":"ContainerStarted","Data":"d7cb8e77e8378971a0bca8625d45c58d9ded641d783442f8f73e238b6ec2e2cf"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.396824 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t6xzm" event={"ID":"ad35acfc-9b56-41e7-87bd-b43f5c006dd5","Type":"ContainerStarted","Data":"32a67935827d89a4fe33cdc18c8aea74d2ebc04eb4a16cf95e3aba2ecb240dcf"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.398404 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4" event={"ID":"7c9507a4-925b-418e-b824-f338cd69a66e","Type":"ContainerStarted","Data":"eb5bfc714559f49fe82f0eb2b8b18ce764d02898a035bcea361fbd63c963070a"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.400132 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" event={"ID":"f72f30c1-aeab-47d1-b353-5ea33af8eb6b","Type":"ContainerStarted","Data":"d2024e5365119c1c772a3ed14c0f1b5d27c4be2cf481dc22f626eb9880a910f9"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.402284 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" event={"ID":"4c622447-70d1-4b27-b09c-6fa6402f632c","Type":"ContainerStarted","Data":"ca9f99093e71b2e75799d6a27256efbb3ca8f03bde03a626d83023f28c42337b"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.404253 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" event={"ID":"c1a7c228-ae7f-44c7-98df-78bc3949c528","Type":"ContainerStarted","Data":"41f28742f7a955e738de3212cf60da12058bfff516102231cc9b487be3c97589"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.421975 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tx6fd" 
event={"ID":"7307207b-fd57-4efc-807b-7e0664873f73","Type":"ContainerStarted","Data":"139405013ba5fbe4e056c0418bee55f402c7f745c43bba09a23828946c1df1c7"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.422038 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tx6fd" event={"ID":"7307207b-fd57-4efc-807b-7e0664873f73","Type":"ContainerStarted","Data":"07db64620c5044eefec1cbffd8ab58e833d6c27fcef66ba74a1688f0d59a79bf"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.430118 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" event={"ID":"770c6bc0-a6c4-42b9-b21e-d1c36ed7b8bf","Type":"ContainerStarted","Data":"d89a1a358ac2147616620dc9bdcb5050c6771cdd9a3460252094290a0085ef2b"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.453212 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" event={"ID":"ba0745e4-7610-44e2-9a65-cd3875393d64","Type":"ContainerStarted","Data":"9a9f2ac8b63c2c8caea215369cf38b58b23111f7fdcb958e05ae967809fb4ed6"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.473365 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.475190 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:02.975166141 +0000 UTC m=+148.627521633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.473502 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" event={"ID":"66eeb178-2611-4361-962c-a399b1f243b0","Type":"ContainerStarted","Data":"916051aa46a52f6b2746a78401d84cfc2319af4bc4fd998eaaff2e7cfd697157"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.496912 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgsgg" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.525718 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4cr4" event={"ID":"be571415-6ab0-4b8f-a971-3225465af110","Type":"ContainerStarted","Data":"a2252d9642541c57f78b6477d9422d38fe03c9bffb7c7af7da4c29faf6ced816"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.546305 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" event={"ID":"c7ed2e1c-c815-45e4-a5a1-e9006de2e8b3","Type":"ContainerStarted","Data":"193e26ff5142e27d793d997c1307ca5e20c75317341f426b60970d97236d7131"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.548982 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" event={"ID":"320d8560-54f7-4505-aa98-5beb6c45505e","Type":"ContainerStarted","Data":"005924651a6cb53de22088169d28a2182550178445f46faf130801e0657a7561"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.555683 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" event={"ID":"cd10b89a-565a-4ae6-b7dd-4bac183ece8c","Type":"ContainerStarted","Data":"d88e5055131efaed5f914ca26f1117658cd8a05cbea4828aa1b8f40ed095fd20"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.555835 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" event={"ID":"cd10b89a-565a-4ae6-b7dd-4bac183ece8c","Type":"ContainerStarted","Data":"7c4a012248b8e756d9e2e998b4271a6b0121bb1a9060285dbcd916824736d91f"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.556913 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" event={"ID":"7727572b-c246-4b22-b3eb-14275d3d25ee","Type":"ContainerStarted","Data":"e0605f364a944b73985438110f75c81b37eb4b5d1b3c1b80020368cd9a3e179a"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.564864 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vm6dc" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.566361 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cqdk9" 
event={"ID":"2572ea5c-d57f-4bbc-b7a8-a72fcd4cced5","Type":"ContainerStarted","Data":"def445cc1484f1ddcb4c2f6818d89a9fb3a97ebbbffef677652806f57ae5bc89"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.570164 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" event={"ID":"082e51f2-7ac3-4111-a40d-eb8498db9153","Type":"ContainerStarted","Data":"38dfde05ce1a21d787aff575d63ca9743b7df7c4aef8dd76f2731118e18ecb9b"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.575550 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.576476 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.576910 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:03.076890246 +0000 UTC m=+148.729245908 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.580316 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-xwx4w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.580371 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xwx4w" podUID="301e3f1a-19c5-47a7-b85d-81676098f971" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.580753 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-znxn2" event={"ID":"9ce2a24d-e7e3-4c93-a46a-e914e248d6ae","Type":"ContainerStarted","Data":"99cddda84a9c7b6f27f72416db2d8539fb7928211fca37e7debd56afa5e1c9f7"} Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.596115 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.600299 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ctv89" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.605904 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-hn2j2" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.631938 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.677602 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.677821 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:03.177785791 +0000 UTC m=+148.830141283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.680528 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.683239 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:03.183217053 +0000 UTC m=+148.835572725 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.694496 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8b5jc" podStartSLOduration=123.694472042 podStartE2EDuration="2m3.694472042s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:02.691840164 +0000 UTC m=+148.344195656" watchObservedRunningTime="2026-01-27 07:54:02.694472042 +0000 UTC m=+148.346827534" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.695733 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rkzlx" podStartSLOduration=8.695723719 podStartE2EDuration="8.695723719s" podCreationTimestamp="2026-01-27 07:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:02.493054226 +0000 UTC m=+148.145409718" watchObservedRunningTime="2026-01-27 07:54:02.695723719 +0000 UTC m=+148.348079211" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.723703 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:02 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:02 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:02 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.725035 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.796520 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.796799 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:03.296758018 +0000 UTC m=+148.949113510 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.797754 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.798267 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:03.298247224 +0000 UTC m=+148.950602716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.830356 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q44w4" podStartSLOduration=124.830325917 podStartE2EDuration="2m4.830325917s" podCreationTimestamp="2026-01-27 07:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:02.830191433 +0000 UTC m=+148.482546935" watchObservedRunningTime="2026-01-27 07:54:02.830325917 +0000 UTC m=+148.482681409" Jan 27 07:54:02 crc kubenswrapper[4787]: I0127 07:54:02.901494 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:02 crc kubenswrapper[4787]: E0127 07:54:02.901918 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:03.401894041 +0000 UTC m=+149.054249533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.004332 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.004382 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.004405 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.004431 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.004493 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:54:03 crc kubenswrapper[4787]: E0127 07:54:03.006688 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:03.50666591 +0000 UTC m=+149.159021402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.008377 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.014535 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.016330 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.029116 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.097064 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.105775 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:03 crc kubenswrapper[4787]: E0127 07:54:03.106311 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:03.606285746 +0000 UTC m=+149.258641238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.108721 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.116250 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.127210 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-t6xzm" podStartSLOduration=124.127179984 podStartE2EDuration="2m4.127179984s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:02.947954365 +0000 UTC m=+148.600309857" watchObservedRunningTime="2026-01-27 07:54:03.127179984 +0000 UTC m=+148.779535476" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.129025 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wv4z7" podStartSLOduration=124.129015552 podStartE2EDuration="2m4.129015552s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:03.12304035 +0000 UTC m=+148.775395842" watchObservedRunningTime="2026-01-27 07:54:03.129015552 +0000 UTC m=+148.781371044" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.193631 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-tx6fd" podStartSLOduration=124.193608856 podStartE2EDuration="2m4.193608856s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:03.192423931 +0000 UTC m=+148.844779443" watchObservedRunningTime="2026-01-27 07:54:03.193608856 +0000 UTC m=+148.845964348" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.207147 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:03 crc kubenswrapper[4787]: E0127 07:54:03.207526 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:03.707513273 +0000 UTC m=+149.359868765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.300776 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 07:49:02 +0000 UTC, rotation deadline is 2026-11-11 08:42:07.78994776 +0000 UTC Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.300869 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6912h48m4.489081643s for next certificate rotation Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.308738 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:03 crc kubenswrapper[4787]: E0127 07:54:03.309662 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:03.809634203 +0000 UTC m=+149.461989695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.411242 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:03 crc kubenswrapper[4787]: E0127 07:54:03.411647 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:03.911633129 +0000 UTC m=+149.563988621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.488794 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xf9dv" podStartSLOduration=124.488770049 podStartE2EDuration="2m4.488770049s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:03.486198574 +0000 UTC m=+149.138554066" watchObservedRunningTime="2026-01-27 07:54:03.488770049 +0000 UTC m=+149.141125531" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.521055 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:03 crc kubenswrapper[4787]: E0127 07:54:03.521205 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:04.021174955 +0000 UTC m=+149.673530447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.521517 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:03 crc kubenswrapper[4787]: E0127 07:54:03.522127 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:04.022090019 +0000 UTC m=+149.674445511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.574136 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7czl" podStartSLOduration=124.574113925 podStartE2EDuration="2m4.574113925s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:03.57207547 +0000 UTC m=+149.224430972" watchObservedRunningTime="2026-01-27 07:54:03.574113925 +0000 UTC m=+149.226469417" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.576206 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kqzdm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.576311 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" podUID="082e51f2-7ac3-4111-a40d-eb8498db9153" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.622488 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:03 crc kubenswrapper[4787]: E0127 07:54:03.622833 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:04.122812848 +0000 UTC m=+149.775168340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.624838 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" event={"ID":"f72f30c1-aeab-47d1-b353-5ea33af8eb6b","Type":"ContainerStarted","Data":"8147fedbecc33c736afd3c6aa7d07546d68acce0f091c718e518023f8d45d6f7"} Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.630881 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" event={"ID":"ba0745e4-7610-44e2-9a65-cd3875393d64","Type":"ContainerStarted","Data":"c9f35cb6d57784ba7f0529a72cd25a8350d6c643eadaffd067ef0a9e8ee75698"} Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.650699 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vl4tj" podStartSLOduration=124.650662784 podStartE2EDuration="2m4.650662784s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:03.633398601 +0000 UTC m=+149.285754093" watchObservedRunningTime="2026-01-27 07:54:03.650662784 +0000 UTC m=+149.303018276" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.694766 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" podStartSLOduration=124.694747524 podStartE2EDuration="2m4.694747524s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:03.693102883 +0000 UTC m=+149.345458375" watchObservedRunningTime="2026-01-27 07:54:03.694747524 +0000 UTC m=+149.347103016" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.727718 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.740522 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:03 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:03 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:03 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.741591 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Jan 27 07:54:03 crc kubenswrapper[4787]: E0127 07:54:03.746399 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:04.246379566 +0000 UTC m=+149.898735058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.834910 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:03 crc kubenswrapper[4787]: E0127 07:54:03.835456 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:04.33543384 +0000 UTC m=+149.987789332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:03 crc kubenswrapper[4787]: I0127 07:54:03.937714 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:03 crc kubenswrapper[4787]: E0127 07:54:03.938193 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:04.438174383 +0000 UTC m=+150.090529875 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.042355 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:04 crc kubenswrapper[4787]: E0127 07:54:04.043127 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:04.543110698 +0000 UTC m=+150.195466190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.144592 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:04 crc kubenswrapper[4787]: E0127 07:54:04.145030 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:04.64501185 +0000 UTC m=+150.297367342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.246425 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:04 crc kubenswrapper[4787]: E0127 07:54:04.246866 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:04.74684627 +0000 UTC m=+150.399201762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.300610 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" podStartSLOduration=126.300584119 podStartE2EDuration="2m6.300584119s" podCreationTimestamp="2026-01-27 07:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:03.920049958 +0000 UTC m=+149.572405480" watchObservedRunningTime="2026-01-27 07:54:04.300584119 +0000 UTC m=+149.952939611" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.349941 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:04 crc kubenswrapper[4787]: E0127 07:54:04.350671 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:04.850655292 +0000 UTC m=+150.503010784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.389861 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-npc57"] Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.392483 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.409601 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.424831 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-npc57"] Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.451625 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:04 crc kubenswrapper[4787]: E0127 07:54:04.452165 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:04.952147159 +0000 UTC m=+150.604502651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.557621 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.557713 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af45749-cd7f-490a-b2f6-bf979ea6467e-utilities\") pod \"certified-operators-npc57\" (UID: \"7af45749-cd7f-490a-b2f6-bf979ea6467e\") " pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.557750 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz5bs\" (UniqueName: \"kubernetes.io/projected/7af45749-cd7f-490a-b2f6-bf979ea6467e-kube-api-access-wz5bs\") pod \"certified-operators-npc57\" (UID: \"7af45749-cd7f-490a-b2f6-bf979ea6467e\") " pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.557776 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af45749-cd7f-490a-b2f6-bf979ea6467e-catalog-content\") pod \"certified-operators-npc57\" (UID: \"7af45749-cd7f-490a-b2f6-bf979ea6467e\") " pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:04 crc kubenswrapper[4787]: E0127 07:54:04.558166 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:05.058151494 +0000 UTC m=+150.710506986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.578460 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tnxzl"] Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.580099 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.586011 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.605280 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tnxzl"] Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.631084 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kqzdm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.631184 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" podUID="082e51f2-7ac3-4111-a40d-eb8498db9153" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.661456 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:04 crc kubenswrapper[4787]: E0127 07:54:04.661568 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:05.161536641 +0000 UTC m=+150.813892133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.661813 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44cc5af2-0636-4589-a988-e7e32bfea075-utilities\") pod \"community-operators-tnxzl\" (UID: \"44cc5af2-0636-4589-a988-e7e32bfea075\") " pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.661869 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af45749-cd7f-490a-b2f6-bf979ea6467e-utilities\") pod \"certified-operators-npc57\" (UID: \"7af45749-cd7f-490a-b2f6-bf979ea6467e\") " pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.661896 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz5bs\" (UniqueName: \"kubernetes.io/projected/7af45749-cd7f-490a-b2f6-bf979ea6467e-kube-api-access-wz5bs\") pod \"certified-operators-npc57\" (UID: \"7af45749-cd7f-490a-b2f6-bf979ea6467e\") " pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.661915 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af45749-cd7f-490a-b2f6-bf979ea6467e-catalog-content\") pod \"certified-operators-npc57\" (UID: \"7af45749-cd7f-490a-b2f6-bf979ea6467e\") " pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.661945 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tmpp\" (UniqueName: \"kubernetes.io/projected/44cc5af2-0636-4589-a988-e7e32bfea075-kube-api-access-5tmpp\") pod \"community-operators-tnxzl\" (UID: \"44cc5af2-0636-4589-a988-e7e32bfea075\") " pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.661995 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.662018 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44cc5af2-0636-4589-a988-e7e32bfea075-catalog-content\") pod \"community-operators-tnxzl\" (UID: \"44cc5af2-0636-4589-a988-e7e32bfea075\") " pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.662445 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7af45749-cd7f-490a-b2f6-bf979ea6467e-utilities\") pod \"certified-operators-npc57\" (UID: \"7af45749-cd7f-490a-b2f6-bf979ea6467e\") " pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.662825 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af45749-cd7f-490a-b2f6-bf979ea6467e-catalog-content\") pod \"certified-operators-npc57\" (UID: \"7af45749-cd7f-490a-b2f6-bf979ea6467e\") " pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:04 crc kubenswrapper[4787]: E0127 07:54:04.663057 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:05.163033587 +0000 UTC m=+150.815389079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.674530 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"55076bce06be9fa0dc9f6c9adfcefb7226f9ea09b1332b9b63626a68a828e31a"} Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.678650 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3cd49041d9face90fd6917b64fbf9e045896d6a0cf124e89038197ea750b3b90"} Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.679591 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"525470d42bb9b8a1c30570a7364c14ddadd46a66f4651607342d646e8d070258"} Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.696877 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" event={"ID":"f72f30c1-aeab-47d1-b353-5ea33af8eb6b","Type":"ContainerStarted","Data":"5142e1fc4ef05bc43569f915f30f7c73ec539c6e0ce5f921c8b907db94fca10d"} Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.708708 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz5bs\" (UniqueName: \"kubernetes.io/projected/7af45749-cd7f-490a-b2f6-bf979ea6467e-kube-api-access-wz5bs\") pod \"certified-operators-npc57\" (UID: \"7af45749-cd7f-490a-b2f6-bf979ea6467e\") " pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.731869 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:04 crc kubenswrapper[4787]: [-]has-synced failed: reason 
withheld Jan 27 07:54:04 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:04 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.731929 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.735262 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-672cg"] Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.736475 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.763366 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.763657 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44cc5af2-0636-4589-a988-e7e32bfea075-catalog-content\") pod \"community-operators-tnxzl\" (UID: \"44cc5af2-0636-4589-a988-e7e32bfea075\") " pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.763718 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44cc5af2-0636-4589-a988-e7e32bfea075-utilities\") pod \"community-operators-tnxzl\" (UID: \"44cc5af2-0636-4589-a988-e7e32bfea075\") " pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.763918 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tmpp\" (UniqueName: \"kubernetes.io/projected/44cc5af2-0636-4589-a988-e7e32bfea075-kube-api-access-5tmpp\") pod \"community-operators-tnxzl\" (UID: \"44cc5af2-0636-4589-a988-e7e32bfea075\") " pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:54:04 crc kubenswrapper[4787]: E0127 07:54:04.765010 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:05.2649588 +0000 UTC m=+150.917314462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.766088 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44cc5af2-0636-4589-a988-e7e32bfea075-catalog-content\") pod \"community-operators-tnxzl\" (UID: \"44cc5af2-0636-4589-a988-e7e32bfea075\") " pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.766305 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.766767 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44cc5af2-0636-4589-a988-e7e32bfea075-utilities\") pod \"community-operators-tnxzl\" (UID: \"44cc5af2-0636-4589-a988-e7e32bfea075\") " pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.772610 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqzdm" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.823569 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tmpp\" (UniqueName: \"kubernetes.io/projected/44cc5af2-0636-4589-a988-e7e32bfea075-kube-api-access-5tmpp\") pod \"community-operators-tnxzl\" (UID: \"44cc5af2-0636-4589-a988-e7e32bfea075\") " pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.832084 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-672cg"] Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.865216 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqrrx\" (UniqueName: \"kubernetes.io/projected/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-kube-api-access-bqrrx\") pod \"certified-operators-672cg\" (UID: \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\") " pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.865303 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-utilities\") pod \"certified-operators-672cg\" (UID: \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\") " pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.865326 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-catalog-content\") pod \"certified-operators-672cg\" (UID: \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\") " pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.865362 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:04 crc kubenswrapper[4787]: E0127 07:54:04.865785 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:05.365766191 +0000 UTC m=+151.018121693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.918382 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.928202 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cgwgp"] Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.929513 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.967107 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.967460 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqrrx\" (UniqueName: \"kubernetes.io/projected/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-kube-api-access-bqrrx\") pod \"certified-operators-672cg\" (UID: \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\") " pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:04 crc kubenswrapper[4787]: E0127 07:54:04.967603 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:05.467569759 +0000 UTC m=+151.119925251 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.967717 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-utilities\") pod \"certified-operators-672cg\" (UID: \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\") " pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.967764 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-catalog-content\") pod \"certified-operators-672cg\" (UID: \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\") " pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.967820 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:04 crc kubenswrapper[4787]: E0127 07:54:04.968420 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:05.46840974 +0000 UTC m=+151.120765232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.968428 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-utilities\") pod \"certified-operators-672cg\" (UID: \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\") " pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.968941 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-catalog-content\") pod \"certified-operators-672cg\" (UID: \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\") " pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:04 crc kubenswrapper[4787]: I0127 07:54:04.977670 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgwgp"] Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.042318 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqrrx\" (UniqueName: \"kubernetes.io/projected/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-kube-api-access-bqrrx\") pod \"certified-operators-672cg\" (UID: \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\") " pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.070033 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.071045 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.071398 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5r7l\" (UniqueName: \"kubernetes.io/projected/0368ea79-94a8-42e3-8986-dddbec83d755-kube-api-access-z5r7l\") pod \"community-operators-cgwgp\" (UID: \"0368ea79-94a8-42e3-8986-dddbec83d755\") " pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.071425 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0368ea79-94a8-42e3-8986-dddbec83d755-catalog-content\") pod \"community-operators-cgwgp\" (UID: \"0368ea79-94a8-42e3-8986-dddbec83d755\") " pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.071490 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0368ea79-94a8-42e3-8986-dddbec83d755-utilities\") pod \"community-operators-cgwgp\" (UID: \"0368ea79-94a8-42e3-8986-dddbec83d755\") " pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:54:05 crc kubenswrapper[4787]: E0127 07:54:05.071621 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:05.57160102 +0000 UTC m=+151.223956512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.176097 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0368ea79-94a8-42e3-8986-dddbec83d755-utilities\") pod \"community-operators-cgwgp\" (UID: \"0368ea79-94a8-42e3-8986-dddbec83d755\") " pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.176168 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5r7l\" (UniqueName: \"kubernetes.io/projected/0368ea79-94a8-42e3-8986-dddbec83d755-kube-api-access-z5r7l\") pod \"community-operators-cgwgp\" (UID: \"0368ea79-94a8-42e3-8986-dddbec83d755\") " pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.176198 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0368ea79-94a8-42e3-8986-dddbec83d755-catalog-content\") pod \"community-operators-cgwgp\" (UID: \"0368ea79-94a8-42e3-8986-dddbec83d755\") " pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.176238 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:05 crc kubenswrapper[4787]: E0127 07:54:05.176581 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:05.676546136 +0000 UTC m=+151.328901628 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.177495 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0368ea79-94a8-42e3-8986-dddbec83d755-utilities\") pod \"community-operators-cgwgp\" (UID: \"0368ea79-94a8-42e3-8986-dddbec83d755\") " pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.178271 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0368ea79-94a8-42e3-8986-dddbec83d755-catalog-content\") pod \"community-operators-cgwgp\" (UID: \"0368ea79-94a8-42e3-8986-dddbec83d755\") " pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.260326 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5r7l\" (UniqueName: \"kubernetes.io/projected/0368ea79-94a8-42e3-8986-dddbec83d755-kube-api-access-z5r7l\") pod \"community-operators-cgwgp\" (UID: \"0368ea79-94a8-42e3-8986-dddbec83d755\") " pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.279491 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:05 crc kubenswrapper[4787]: E0127 07:54:05.280104 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:05.780082987 +0000 UTC m=+151.432438479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.379242 4787 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.382573 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:05 crc kubenswrapper[4787]: E0127 07:54:05.382988 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:05.882974567 +0000 UTC m=+151.535330059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.483412 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:05 crc kubenswrapper[4787]: E0127 07:54:05.483906 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:05.983874091 +0000 UTC m=+151.636229583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.528243 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-npc57"] Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.548264 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.588603 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:05 crc kubenswrapper[4787]: E0127 07:54:05.589260 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:06.089244552 +0000 UTC m=+151.741600034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.696955 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:05 crc kubenswrapper[4787]: E0127 07:54:05.697455 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:06.197429859 +0000 UTC m=+151.849785351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.699222 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-672cg"] Jan 27 07:54:05 crc kubenswrapper[4787]: W0127 07:54:05.715879 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1ecf2a_6b91_48d7_873e_918dd4e045fc.slice/crio-b8aaeac82942157bbd1bdc8b6aa3df78bb8abe8a0a411a20aacb7836a96b1339 WatchSource:0}: Error finding container b8aaeac82942157bbd1bdc8b6aa3df78bb8abe8a0a411a20aacb7836a96b1339: Status 404 returned error can't find the container with id b8aaeac82942157bbd1bdc8b6aa3df78bb8abe8a0a411a20aacb7836a96b1339 Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.725902 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:05 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:05 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:05 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.725946 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.739891 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" event={"ID":"f72f30c1-aeab-47d1-b353-5ea33af8eb6b","Type":"ContainerStarted","Data":"b1ca9995e2add56c2a17127ab2ad642f5a17be1d9e3c4f15806046289c91bc8a"} Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.758708 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7b945307886250632e5bd4040693345426f7bc80be3677f88f8e89e9dc77dc7a"} Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.773424 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bd1a40d31617672448a6f5392eac59ecf9416a45d350f7dacd468d69e900a076"} Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.774308 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.776482 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tnxzl"] Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.788898 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"83d226bf4cbb8abdbbfb7a8b939bbf8d690037bd9cfe8bce754784e1de4650ec"} Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.798638 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:05 crc kubenswrapper[4787]: E0127 07:54:05.799993 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:06.299967264 +0000 UTC m=+151.952322756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.801722 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-mwnxd" podStartSLOduration=11.801701879 podStartE2EDuration="11.801701879s" podCreationTimestamp="2026-01-27 07:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:05.800343348 +0000 UTC m=+151.452698860" watchObservedRunningTime="2026-01-27 07:54:05.801701879 +0000 UTC m=+151.454057371" Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.815092 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npc57" event={"ID":"7af45749-cd7f-490a-b2f6-bf979ea6467e","Type":"ContainerStarted","Data":"d4eb89ad2e8044f8b7ec7f7b1e0a8d0bd6b479506668e725c471bfff9235d2ce"} Jan 27 07:54:05 crc kubenswrapper[4787]: I0127 07:54:05.901740 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:05 crc kubenswrapper[4787]: E0127 07:54:05.904419 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:06.404384959 +0000 UTC m=+152.056740591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.003374 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:06 crc kubenswrapper[4787]: E0127 07:54:06.004311 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:06.504293358 +0000 UTC m=+152.156648850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.116615 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:06 crc kubenswrapper[4787]: E0127 07:54:06.117706 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:54:06.617645166 +0000 UTC m=+152.270000658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.169031 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgwgp"] Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.221638 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:06 crc kubenswrapper[4787]: E0127 07:54:06.222164 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:54:06.722148874 +0000 UTC m=+152.374504366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d44bj" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:54:06 crc kubenswrapper[4787]: E0127 07:54:06.225272 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1ecf2a_6b91_48d7_873e_918dd4e045fc.slice/crio-conmon-01d251eebd7c2252e3ae9b7ab01cffa2fc304155551715934787979ebd43ab3e.scope\": RecentStats: unable to find data in memory cache]" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.262684 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.263504 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.278946 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.279061 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.279139 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.301761 4787 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T07:54:05.379276149Z","Handler":null,"Name":""} Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.311570 4787 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.311625 4787 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.327334 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.327572 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38edf9c9-43b3-4895-ac38-d33e35dd84ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"38edf9c9-43b3-4895-ac38-d33e35dd84ff\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.327634 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38edf9c9-43b3-4895-ac38-d33e35dd84ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"38edf9c9-43b3-4895-ac38-d33e35dd84ff\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.336758 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.429302 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38edf9c9-43b3-4895-ac38-d33e35dd84ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"38edf9c9-43b3-4895-ac38-d33e35dd84ff\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.429890 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.429935 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38edf9c9-43b3-4895-ac38-d33e35dd84ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"38edf9c9-43b3-4895-ac38-d33e35dd84ff\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.429714 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38edf9c9-43b3-4895-ac38-d33e35dd84ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"38edf9c9-43b3-4895-ac38-d33e35dd84ff\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.434515 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.434593 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.476468 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38edf9c9-43b3-4895-ac38-d33e35dd84ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"38edf9c9-43b3-4895-ac38-d33e35dd84ff\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.523768 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ht6h9"] Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.524965 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.527758 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.550511 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ht6h9"] Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.590505 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d44bj\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.632819 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624e9a13-c9c5-4ef3-8628-056bfc65338b-utilities\") pod \"redhat-marketplace-ht6h9\" (UID: \"624e9a13-c9c5-4ef3-8628-056bfc65338b\") " pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.632897 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkvcd\" (UniqueName: \"kubernetes.io/projected/624e9a13-c9c5-4ef3-8628-056bfc65338b-kube-api-access-nkvcd\") pod \"redhat-marketplace-ht6h9\" (UID: \"624e9a13-c9c5-4ef3-8628-056bfc65338b\") " pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.633285 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624e9a13-c9c5-4ef3-8628-056bfc65338b-catalog-content\") pod \"redhat-marketplace-ht6h9\" (UID: \"624e9a13-c9c5-4ef3-8628-056bfc65338b\") " pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.665717 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.714178 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:06 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:06 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:06 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.714287 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.735179 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624e9a13-c9c5-4ef3-8628-056bfc65338b-catalog-content\") pod \"redhat-marketplace-ht6h9\" (UID: \"624e9a13-c9c5-4ef3-8628-056bfc65338b\") " pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.735287 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624e9a13-c9c5-4ef3-8628-056bfc65338b-utilities\") pod \"redhat-marketplace-ht6h9\" (UID: \"624e9a13-c9c5-4ef3-8628-056bfc65338b\") " pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.735329 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkvcd\" (UniqueName: \"kubernetes.io/projected/624e9a13-c9c5-4ef3-8628-056bfc65338b-kube-api-access-nkvcd\") pod \"redhat-marketplace-ht6h9\" (UID: \"624e9a13-c9c5-4ef3-8628-056bfc65338b\") " pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.736839 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624e9a13-c9c5-4ef3-8628-056bfc65338b-catalog-content\") pod \"redhat-marketplace-ht6h9\" (UID: \"624e9a13-c9c5-4ef3-8628-056bfc65338b\") " pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.737086 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624e9a13-c9c5-4ef3-8628-056bfc65338b-utilities\") pod \"redhat-marketplace-ht6h9\" (UID: \"624e9a13-c9c5-4ef3-8628-056bfc65338b\") " pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.742165 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.742264 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.749419 4787 patch_prober.go:28] interesting pod/console-f9d7485db-qptnb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= 
Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.749483 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qptnb" podUID="b070600e-8a6f-4bb9-a1c2-e763f55d90eb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.761526 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkvcd\" (UniqueName: \"kubernetes.io/projected/624e9a13-c9c5-4ef3-8628-056bfc65338b-kube-api-access-nkvcd\") pod \"redhat-marketplace-ht6h9\" (UID: \"624e9a13-c9c5-4ef3-8628-056bfc65338b\") " pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.801055 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-xwx4w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.801126 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xwx4w" podUID="301e3f1a-19c5-47a7-b85d-81676098f971" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.801167 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-xwx4w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.801248 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xwx4w" podUID="301e3f1a-19c5-47a7-b85d-81676098f971" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.825168 4787 generic.go:334] "Generic (PLEG): container finished" podID="7af45749-cd7f-490a-b2f6-bf979ea6467e" containerID="ae2a0db24a496532988d8c9c232f3f25c682b2d988bff4fb3dec2a81481a0733" exitCode=0 Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.826172 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npc57" event={"ID":"7af45749-cd7f-490a-b2f6-bf979ea6467e","Type":"ContainerDied","Data":"ae2a0db24a496532988d8c9c232f3f25c682b2d988bff4fb3dec2a81481a0733"} Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.828672 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.831926 4787 generic.go:334] "Generic (PLEG): container finished" podID="0368ea79-94a8-42e3-8986-dddbec83d755" containerID="da0792067324438fea958c400a5fb011057b3895c2b4ec11d5379beb14d61d0d" exitCode=0 Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.832011 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgwgp" event={"ID":"0368ea79-94a8-42e3-8986-dddbec83d755","Type":"ContainerDied","Data":"da0792067324438fea958c400a5fb011057b3895c2b4ec11d5379beb14d61d0d"} Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.832060 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgwgp" event={"ID":"0368ea79-94a8-42e3-8986-dddbec83d755","Type":"ContainerStarted","Data":"7a180c063d8433886b9b99fe8382a8d973c8b6b4d1425416261139533b679d98"} Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.835668 4787 generic.go:334] "Generic (PLEG): container finished" podID="152af819-6586-4345-b2ef-cb8ad845a6b1" containerID="4bba20fc10f4125f9489dd4dcdc6c569bd57f97297dffe6c9dba0e5f759c10ea" exitCode=0 Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.835763 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" event={"ID":"152af819-6586-4345-b2ef-cb8ad845a6b1","Type":"ContainerDied","Data":"4bba20fc10f4125f9489dd4dcdc6c569bd57f97297dffe6c9dba0e5f759c10ea"} Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.838389 4787 generic.go:334] "Generic (PLEG): container finished" podID="44cc5af2-0636-4589-a988-e7e32bfea075" containerID="5ca52b27fd7e43c704f8e05cf0d0277c2c4f718ac46a1e5031c9baa689471895" exitCode=0 Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.838502 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxzl" event={"ID":"44cc5af2-0636-4589-a988-e7e32bfea075","Type":"ContainerDied","Data":"5ca52b27fd7e43c704f8e05cf0d0277c2c4f718ac46a1e5031c9baa689471895"} Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.838526 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxzl" event={"ID":"44cc5af2-0636-4589-a988-e7e32bfea075","Type":"ContainerStarted","Data":"81b4d3c6ee8e93f24d5baa8dea52cfb0d85fded6205579eedde915fa9ac03cb2"} Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.843823 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.847581 4787 generic.go:334] "Generic (PLEG): container finished" podID="2a1ecf2a-6b91-48d7-873e-918dd4e045fc" containerID="01d251eebd7c2252e3ae9b7ab01cffa2fc304155551715934787979ebd43ab3e" exitCode=0 Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.847763 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-672cg" event={"ID":"2a1ecf2a-6b91-48d7-873e-918dd4e045fc","Type":"ContainerDied","Data":"01d251eebd7c2252e3ae9b7ab01cffa2fc304155551715934787979ebd43ab3e"} Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.847796 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-672cg" event={"ID":"2a1ecf2a-6b91-48d7-873e-918dd4e045fc","Type":"ContainerStarted","Data":"b8aaeac82942157bbd1bdc8b6aa3df78bb8abe8a0a411a20aacb7836a96b1339"} Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.856411 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.897496 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.917988 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9l2"] Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.920391 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.934659 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9l2"] Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.968102 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.968134 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.995393 4787 patch_prober.go:28] interesting pod/apiserver-76f77b778f-zkx8b container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 07:54:06 crc kubenswrapper[4787]: [+]log ok Jan 27 07:54:06 crc kubenswrapper[4787]: [+]etcd ok Jan 27 07:54:06 crc kubenswrapper[4787]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 07:54:06 crc kubenswrapper[4787]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 07:54:06 crc kubenswrapper[4787]: [+]poststarthook/max-in-flight-filter ok Jan 27 07:54:06 crc kubenswrapper[4787]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 07:54:06 crc kubenswrapper[4787]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 07:54:06 crc kubenswrapper[4787]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 07:54:06 crc kubenswrapper[4787]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 27 07:54:06 crc kubenswrapper[4787]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 07:54:06 crc kubenswrapper[4787]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 07:54:06 crc kubenswrapper[4787]: [+]poststarthook/openshift.io-startinformers ok Jan 27 07:54:06 crc kubenswrapper[4787]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 07:54:06 crc kubenswrapper[4787]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 07:54:06 crc kubenswrapper[4787]: livez check failed Jan 27 07:54:06 crc kubenswrapper[4787]: I0127 07:54:06.995633 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" podUID="ba0745e4-7610-44e2-9a65-cd3875393d64" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.043298 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/264daa3b-f5d9-407d-8835-35845525329a-utilities\") pod \"redhat-marketplace-hz9l2\" (UID: \"264daa3b-f5d9-407d-8835-35845525329a\") " pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.043925 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65qxh\" (UniqueName: \"kubernetes.io/projected/264daa3b-f5d9-407d-8835-35845525329a-kube-api-access-65qxh\") pod \"redhat-marketplace-hz9l2\" (UID: \"264daa3b-f5d9-407d-8835-35845525329a\") " pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.043982 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/264daa3b-f5d9-407d-8835-35845525329a-catalog-content\") pod \"redhat-marketplace-hz9l2\" (UID: \"264daa3b-f5d9-407d-8835-35845525329a\") " pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.097185 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.110873 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ht6h9"] Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.146433 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/264daa3b-f5d9-407d-8835-35845525329a-utilities\") pod \"redhat-marketplace-hz9l2\" (UID: \"264daa3b-f5d9-407d-8835-35845525329a\") " pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.146536 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65qxh\" (UniqueName: \"kubernetes.io/projected/264daa3b-f5d9-407d-8835-35845525329a-kube-api-access-65qxh\") pod \"redhat-marketplace-hz9l2\" (UID: \"264daa3b-f5d9-407d-8835-35845525329a\") " pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.146578 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/264daa3b-f5d9-407d-8835-35845525329a-catalog-content\") pod \"redhat-marketplace-hz9l2\" (UID: \"264daa3b-f5d9-407d-8835-35845525329a\") " pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.147135 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/264daa3b-f5d9-407d-8835-35845525329a-catalog-content\") pod \"redhat-marketplace-hz9l2\" (UID: \"264daa3b-f5d9-407d-8835-35845525329a\") " pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.147406 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/264daa3b-f5d9-407d-8835-35845525329a-utilities\") pod \"redhat-marketplace-hz9l2\" (UID: \"264daa3b-f5d9-407d-8835-35845525329a\") " pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.168488 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-d44bj"] Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.174214 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65qxh\" (UniqueName: \"kubernetes.io/projected/264daa3b-f5d9-407d-8835-35845525329a-kube-api-access-65qxh\") pod \"redhat-marketplace-hz9l2\" (UID: \"264daa3b-f5d9-407d-8835-35845525329a\") " pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.273907 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.507886 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kkjnl"] Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.511923 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.517366 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.535411 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kkjnl"] Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.647048 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9l2"] Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.653467 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e42281e-20ed-4105-866f-878ffbf6c6eb-utilities\") pod \"redhat-operators-kkjnl\" (UID: \"1e42281e-20ed-4105-866f-878ffbf6c6eb\") " pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.653881 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76lt5\" (UniqueName: \"kubernetes.io/projected/1e42281e-20ed-4105-866f-878ffbf6c6eb-kube-api-access-76lt5\") pod \"redhat-operators-kkjnl\" (UID: \"1e42281e-20ed-4105-866f-878ffbf6c6eb\") " pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.653937 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e42281e-20ed-4105-866f-878ffbf6c6eb-catalog-content\") pod \"redhat-operators-kkjnl\" (UID: \"1e42281e-20ed-4105-866f-878ffbf6c6eb\") " pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.713487 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.713533 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:07 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:07 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:07 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.714210 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.755988 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e42281e-20ed-4105-866f-878ffbf6c6eb-utilities\") pod \"redhat-operators-kkjnl\" (UID: \"1e42281e-20ed-4105-866f-878ffbf6c6eb\") " 
pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.756140 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76lt5\" (UniqueName: \"kubernetes.io/projected/1e42281e-20ed-4105-866f-878ffbf6c6eb-kube-api-access-76lt5\") pod \"redhat-operators-kkjnl\" (UID: \"1e42281e-20ed-4105-866f-878ffbf6c6eb\") " pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.756215 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e42281e-20ed-4105-866f-878ffbf6c6eb-catalog-content\") pod \"redhat-operators-kkjnl\" (UID: \"1e42281e-20ed-4105-866f-878ffbf6c6eb\") " pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.756751 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e42281e-20ed-4105-866f-878ffbf6c6eb-utilities\") pod \"redhat-operators-kkjnl\" (UID: \"1e42281e-20ed-4105-866f-878ffbf6c6eb\") " pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.756909 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e42281e-20ed-4105-866f-878ffbf6c6eb-catalog-content\") pod \"redhat-operators-kkjnl\" (UID: \"1e42281e-20ed-4105-866f-878ffbf6c6eb\") " pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.784211 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76lt5\" (UniqueName: \"kubernetes.io/projected/1e42281e-20ed-4105-866f-878ffbf6c6eb-kube-api-access-76lt5\") pod \"redhat-operators-kkjnl\" (UID: \"1e42281e-20ed-4105-866f-878ffbf6c6eb\") " pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.840307 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.862775 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht6h9" event={"ID":"624e9a13-c9c5-4ef3-8628-056bfc65338b","Type":"ContainerDied","Data":"16105c08221c0a2bad9341270bf79d1c85ac086baa9a0934ed9322b6ec8a995e"} Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.862860 4787 generic.go:334] "Generic (PLEG): container finished" podID="624e9a13-c9c5-4ef3-8628-056bfc65338b" containerID="16105c08221c0a2bad9341270bf79d1c85ac086baa9a0934ed9322b6ec8a995e" exitCode=0 Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.863196 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht6h9" event={"ID":"624e9a13-c9c5-4ef3-8628-056bfc65338b","Type":"ContainerStarted","Data":"c2980947d8538d4e3d4cb9e62697c884d7b7e8b6d4185152424643323c016212"} Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.873073 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9l2" event={"ID":"264daa3b-f5d9-407d-8835-35845525329a","Type":"ContainerStarted","Data":"34b029af2088374d8be804c9373b55e19265a67e5ebece55f9fd4a6ba2ab1d08"} Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.875626 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"38edf9c9-43b3-4895-ac38-d33e35dd84ff","Type":"ContainerStarted","Data":"f7112a8124ac99640f54d3193f30b49640a3e16a7e1b6909eba69d818ec9cb7b"} Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.875657 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"38edf9c9-43b3-4895-ac38-d33e35dd84ff","Type":"ContainerStarted","Data":"b76d9b5d789e59b56dccfabe215ff4d7bfa2fe6c0b0475134d18f24caf01b1e1"} Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.891141 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" event={"ID":"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0","Type":"ContainerStarted","Data":"661df021152bc7f98a651ccf560f2bb03e44a6e4242d1b20ac64c7c4a614aaf1"} Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.891199 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" event={"ID":"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0","Type":"ContainerStarted","Data":"f164a16895fc4826be32f420da3768fd0f293a4a904dce0ed637757e1fbcf513"} Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.902163 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qjd6t"] Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.903362 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.928004 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qjd6t"] Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.930642 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.9306204500000002 podStartE2EDuration="1.93062045s" podCreationTimestamp="2026-01-27 07:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:07.924472911 +0000 UTC m=+153.576828413" watchObservedRunningTime="2026-01-27 07:54:07.93062045 +0000 UTC m=+153.582975942" Jan 27 07:54:07 crc kubenswrapper[4787]: I0127 07:54:07.951923 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" podStartSLOduration=128.951897842 podStartE2EDuration="2m8.951897842s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:07.949869026 +0000 UTC m=+153.602224528" watchObservedRunningTime="2026-01-27 07:54:07.951897842 +0000 UTC m=+153.604253334" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.073104 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-catalog-content\") pod \"redhat-operators-qjd6t\" (UID: \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\") " pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.076061 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-utilities\") pod \"redhat-operators-qjd6t\" (UID: \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\") " pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.076132 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dfzs\" (UniqueName: \"kubernetes.io/projected/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-kube-api-access-8dfzs\") pod \"redhat-operators-qjd6t\" (UID: \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\") " pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.183892 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-utilities\") pod \"redhat-operators-qjd6t\" (UID: \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\") " pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.183964 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dfzs\" (UniqueName: \"kubernetes.io/projected/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-kube-api-access-8dfzs\") pod \"redhat-operators-qjd6t\" (UID: \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\") " pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.184221 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-catalog-content\") pod \"redhat-operators-qjd6t\" (UID: \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\") " pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.186028 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-catalog-content\") pod \"redhat-operators-qjd6t\" (UID: \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\") " pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.200973 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-utilities\") pod \"redhat-operators-qjd6t\" (UID: \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\") " pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.213063 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dfzs\" (UniqueName: \"kubernetes.io/projected/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-kube-api-access-8dfzs\") pod \"redhat-operators-qjd6t\" (UID: \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\") " pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.214603 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kkjnl"] Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.231415 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.317534 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.387361 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r6l9\" (UniqueName: \"kubernetes.io/projected/152af819-6586-4345-b2ef-cb8ad845a6b1-kube-api-access-9r6l9\") pod \"152af819-6586-4345-b2ef-cb8ad845a6b1\" (UID: \"152af819-6586-4345-b2ef-cb8ad845a6b1\") " Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.387571 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/152af819-6586-4345-b2ef-cb8ad845a6b1-config-volume\") pod \"152af819-6586-4345-b2ef-cb8ad845a6b1\" (UID: \"152af819-6586-4345-b2ef-cb8ad845a6b1\") " Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.387648 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/152af819-6586-4345-b2ef-cb8ad845a6b1-secret-volume\") pod \"152af819-6586-4345-b2ef-cb8ad845a6b1\" (UID: \"152af819-6586-4345-b2ef-cb8ad845a6b1\") " Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.388444 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/152af819-6586-4345-b2ef-cb8ad845a6b1-config-volume" (OuterVolumeSpecName: "config-volume") pod "152af819-6586-4345-b2ef-cb8ad845a6b1" (UID: "152af819-6586-4345-b2ef-cb8ad845a6b1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.392935 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/152af819-6586-4345-b2ef-cb8ad845a6b1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "152af819-6586-4345-b2ef-cb8ad845a6b1" (UID: "152af819-6586-4345-b2ef-cb8ad845a6b1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.398994 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152af819-6586-4345-b2ef-cb8ad845a6b1-kube-api-access-9r6l9" (OuterVolumeSpecName: "kube-api-access-9r6l9") pod "152af819-6586-4345-b2ef-cb8ad845a6b1" (UID: "152af819-6586-4345-b2ef-cb8ad845a6b1"). InnerVolumeSpecName "kube-api-access-9r6l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.489352 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/152af819-6586-4345-b2ef-cb8ad845a6b1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.489927 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r6l9\" (UniqueName: \"kubernetes.io/projected/152af819-6586-4345-b2ef-cb8ad845a6b1-kube-api-access-9r6l9\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.490017 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/152af819-6586-4345-b2ef-cb8ad845a6b1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.508109 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qjd6t"] Jan 27 07:54:08 crc kubenswrapper[4787]: W0127 07:54:08.519153 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f5e46e1_1252_4a0a_ad11_8d4236f3759b.slice/crio-a7fc73f366dad2ee2177e45cec1c07c9411e1ae51b5099acd00178407f132bd0 WatchSource:0}: Error finding container a7fc73f366dad2ee2177e45cec1c07c9411e1ae51b5099acd00178407f132bd0: Status 404 returned error can't find the container with id a7fc73f366dad2ee2177e45cec1c07c9411e1ae51b5099acd00178407f132bd0 Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.722211 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:08 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:08 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:08 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.722302 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.929006 4787 generic.go:334] "Generic (PLEG): container finished" podID="1e42281e-20ed-4105-866f-878ffbf6c6eb" containerID="71a0fd62f051e1733036586c55bfd4c513668f6b2af829fed34f76f2fd16d4a2" 
exitCode=0 Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.929377 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkjnl" event={"ID":"1e42281e-20ed-4105-866f-878ffbf6c6eb","Type":"ContainerDied","Data":"71a0fd62f051e1733036586c55bfd4c513668f6b2af829fed34f76f2fd16d4a2"} Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.929490 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkjnl" event={"ID":"1e42281e-20ed-4105-866f-878ffbf6c6eb","Type":"ContainerStarted","Data":"69a14500beb72f949103663a0b1a2f611ef846aca8d992b4feec769e4103a913"} Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.940058 4787 generic.go:334] "Generic (PLEG): container finished" podID="38edf9c9-43b3-4895-ac38-d33e35dd84ff" containerID="f7112a8124ac99640f54d3193f30b49640a3e16a7e1b6909eba69d818ec9cb7b" exitCode=0 Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.940231 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"38edf9c9-43b3-4895-ac38-d33e35dd84ff","Type":"ContainerDied","Data":"f7112a8124ac99640f54d3193f30b49640a3e16a7e1b6909eba69d818ec9cb7b"} Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.952057 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" event={"ID":"152af819-6586-4345-b2ef-cb8ad845a6b1","Type":"ContainerDied","Data":"dd1d962cbba22622cc8f00742deb5445d3f8c1e1a6d321d913c3f39997ffb3b5"} Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.952127 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd1d962cbba22622cc8f00742deb5445d3f8c1e1a6d321d913c3f39997ffb3b5" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.952084 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-p955p" Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.959323 4787 generic.go:334] "Generic (PLEG): container finished" podID="264daa3b-f5d9-407d-8835-35845525329a" containerID="fc80e896d018112629afc7413a0e19d17cf029077e3a9fbb67d67c7d973bf040" exitCode=0 Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.959426 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9l2" event={"ID":"264daa3b-f5d9-407d-8835-35845525329a","Type":"ContainerDied","Data":"fc80e896d018112629afc7413a0e19d17cf029077e3a9fbb67d67c7d973bf040"} Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.972374 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjd6t" event={"ID":"5f5e46e1-1252-4a0a-ad11-8d4236f3759b","Type":"ContainerStarted","Data":"a7fc73f366dad2ee2177e45cec1c07c9411e1ae51b5099acd00178407f132bd0"} Jan 27 07:54:08 crc kubenswrapper[4787]: I0127 07:54:08.972426 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:09 crc kubenswrapper[4787]: I0127 07:54:09.713332 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:09 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:09 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:09 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:09 crc kubenswrapper[4787]: I0127 07:54:09.713833 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.006475 4787 generic.go:334] "Generic (PLEG): container finished" podID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" containerID="e485664fbc4a751f0a85db31317555b712de9d281e0fb607d7c774fccc36329f" exitCode=0 Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.008840 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjd6t" event={"ID":"5f5e46e1-1252-4a0a-ad11-8d4236f3759b","Type":"ContainerDied","Data":"e485664fbc4a751f0a85db31317555b712de9d281e0fb607d7c774fccc36329f"} Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.461356 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.530274 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38edf9c9-43b3-4895-ac38-d33e35dd84ff-kubelet-dir\") pod \"38edf9c9-43b3-4895-ac38-d33e35dd84ff\" (UID: \"38edf9c9-43b3-4895-ac38-d33e35dd84ff\") " Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.530467 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38edf9c9-43b3-4895-ac38-d33e35dd84ff-kube-api-access\") pod \"38edf9c9-43b3-4895-ac38-d33e35dd84ff\" (UID: \"38edf9c9-43b3-4895-ac38-d33e35dd84ff\") " Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.531066 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38edf9c9-43b3-4895-ac38-d33e35dd84ff-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "38edf9c9-43b3-4895-ac38-d33e35dd84ff" (UID: "38edf9c9-43b3-4895-ac38-d33e35dd84ff"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.596644 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 07:54:10 crc kubenswrapper[4787]: E0127 07:54:10.597145 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152af819-6586-4345-b2ef-cb8ad845a6b1" containerName="collect-profiles" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.597161 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="152af819-6586-4345-b2ef-cb8ad845a6b1" containerName="collect-profiles" Jan 27 07:54:10 crc kubenswrapper[4787]: E0127 07:54:10.597189 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38edf9c9-43b3-4895-ac38-d33e35dd84ff" containerName="pruner" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.597197 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="38edf9c9-43b3-4895-ac38-d33e35dd84ff" containerName="pruner" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.597332 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="152af819-6586-4345-b2ef-cb8ad845a6b1" containerName="collect-profiles" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.597349 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="38edf9c9-43b3-4895-ac38-d33e35dd84ff" containerName="pruner" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.598022 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.598115 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.601915 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.602830 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.631706 4787 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38edf9c9-43b3-4895-ac38-d33e35dd84ff-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.637294 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38edf9c9-43b3-4895-ac38-d33e35dd84ff-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "38edf9c9-43b3-4895-ac38-d33e35dd84ff" (UID: "38edf9c9-43b3-4895-ac38-d33e35dd84ff"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.715165 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:10 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:10 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:10 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.715235 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.733716 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ad51913-b8b4-46f0-90d1-8334a8ce8ebe-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.733789 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ad51913-b8b4-46f0-90d1-8334a8ce8ebe-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.733928 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38edf9c9-43b3-4895-ac38-d33e35dd84ff-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.835207 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ad51913-b8b4-46f0-90d1-8334a8ce8ebe-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.835336 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ad51913-b8b4-46f0-90d1-8334a8ce8ebe-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.835418 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ad51913-b8b4-46f0-90d1-8334a8ce8ebe-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.854070 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ad51913-b8b4-46f0-90d1-8334a8ce8ebe-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:54:10 crc kubenswrapper[4787]: I0127 07:54:10.929323 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:54:11 crc kubenswrapper[4787]: I0127 07:54:11.024508 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"38edf9c9-43b3-4895-ac38-d33e35dd84ff","Type":"ContainerDied","Data":"b76d9b5d789e59b56dccfabe215ff4d7bfa2fe6c0b0475134d18f24caf01b1e1"} Jan 27 07:54:11 crc kubenswrapper[4787]: I0127 07:54:11.024568 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b76d9b5d789e59b56dccfabe215ff4d7bfa2fe6c0b0475134d18f24caf01b1e1" Jan 27 07:54:11 crc kubenswrapper[4787]: I0127 07:54:11.024641 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:54:11 crc kubenswrapper[4787]: I0127 07:54:11.181070 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 07:54:11 crc kubenswrapper[4787]: W0127 07:54:11.187258 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1ad51913_b8b4_46f0_90d1_8334a8ce8ebe.slice/crio-e04d4f313a2745357a590b2ccf3237015c1d625017bfc0adcfd3738e7cba346a WatchSource:0}: Error finding container e04d4f313a2745357a590b2ccf3237015c1d625017bfc0adcfd3738e7cba346a: Status 404 returned error can't find the container with id e04d4f313a2745357a590b2ccf3237015c1d625017bfc0adcfd3738e7cba346a Jan 27 07:54:11 crc kubenswrapper[4787]: I0127 07:54:11.717043 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:11 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:11 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:11 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:11 crc kubenswrapper[4787]: I0127 07:54:11.717132 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:11 crc kubenswrapper[4787]: I0127 07:54:11.969500 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:54:11 crc kubenswrapper[4787]: I0127 07:54:11.976986 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-zkx8b" Jan 27 07:54:12 crc kubenswrapper[4787]: I0127 07:54:12.074474 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe","Type":"ContainerStarted","Data":"efc3a4d788a7f2ef075bd61be09edf228fb114c6649606e1ca368b5477ca776f"} Jan 27 07:54:12 crc kubenswrapper[4787]: I0127 07:54:12.074947 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe","Type":"ContainerStarted","Data":"e04d4f313a2745357a590b2ccf3237015c1d625017bfc0adcfd3738e7cba346a"} Jan 27 07:54:12 crc kubenswrapper[4787]: I0127 07:54:12.095370 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.095345948 podStartE2EDuration="2.095345948s" podCreationTimestamp="2026-01-27 07:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:12.092866056 +0000 UTC m=+157.745221548" watchObservedRunningTime="2026-01-27 07:54:12.095345948 +0000 UTC m=+157.747701440" Jan 27 07:54:12 crc kubenswrapper[4787]: I0127 07:54:12.714250 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:12 crc kubenswrapper[4787]: 
[-]has-synced failed: reason withheld Jan 27 07:54:12 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:12 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:12 crc kubenswrapper[4787]: I0127 07:54:12.714353 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:12 crc kubenswrapper[4787]: I0127 07:54:12.745843 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rkzlx" Jan 27 07:54:13 crc kubenswrapper[4787]: I0127 07:54:13.083478 4787 generic.go:334] "Generic (PLEG): container finished" podID="1ad51913-b8b4-46f0-90d1-8334a8ce8ebe" containerID="efc3a4d788a7f2ef075bd61be09edf228fb114c6649606e1ca368b5477ca776f" exitCode=0 Jan 27 07:54:13 crc kubenswrapper[4787]: I0127 07:54:13.083526 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe","Type":"ContainerDied","Data":"efc3a4d788a7f2ef075bd61be09edf228fb114c6649606e1ca368b5477ca776f"} Jan 27 07:54:13 crc kubenswrapper[4787]: I0127 07:54:13.714598 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:13 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:13 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:13 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:13 crc kubenswrapper[4787]: I0127 07:54:13.715140 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:14 crc kubenswrapper[4787]: I0127 07:54:14.713477 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:14 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:14 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:14 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:14 crc kubenswrapper[4787]: I0127 07:54:14.714026 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:15 crc kubenswrapper[4787]: I0127 07:54:15.713932 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:15 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:15 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:15 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:15 crc kubenswrapper[4787]: I0127 07:54:15.714333 4787 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:16 crc kubenswrapper[4787]: I0127 07:54:16.713650 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:16 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:16 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:16 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:16 crc kubenswrapper[4787]: I0127 07:54:16.713748 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:16 crc kubenswrapper[4787]: I0127 07:54:16.741175 4787 patch_prober.go:28] interesting pod/console-f9d7485db-qptnb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 27 07:54:16 crc kubenswrapper[4787]: I0127 07:54:16.741274 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qptnb" podUID="b070600e-8a6f-4bb9-a1c2-e763f55d90eb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 27 07:54:16 crc kubenswrapper[4787]: I0127 07:54:16.805885 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-xwx4w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 07:54:16 crc kubenswrapper[4787]: I0127 07:54:16.805985 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xwx4w" podUID="301e3f1a-19c5-47a7-b85d-81676098f971" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 07:54:16 crc kubenswrapper[4787]: I0127 07:54:16.806033 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-xwx4w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 07:54:16 crc kubenswrapper[4787]: I0127 07:54:16.806103 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xwx4w" podUID="301e3f1a-19c5-47a7-b85d-81676098f971" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 07:54:17 crc kubenswrapper[4787]: I0127 07:54:17.714202 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:17 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:17 crc kubenswrapper[4787]: 
[+]process-running ok Jan 27 07:54:17 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:17 crc kubenswrapper[4787]: I0127 07:54:17.714736 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:18 crc kubenswrapper[4787]: I0127 07:54:18.713488 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:18 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:18 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:18 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:18 crc kubenswrapper[4787]: I0127 07:54:18.713586 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:19 crc kubenswrapper[4787]: I0127 07:54:19.712824 4787 patch_prober.go:28] interesting pod/router-default-5444994796-jzdds container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:54:19 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Jan 27 07:54:19 crc kubenswrapper[4787]: [+]process-running ok Jan 27 07:54:19 crc kubenswrapper[4787]: healthz check failed Jan 27 07:54:19 crc kubenswrapper[4787]: I0127 07:54:19.712881 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jzdds" podUID="5f4a2300-d512-490d-a876-6ad03f0c2f31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:54:19 crc kubenswrapper[4787]: I0127 07:54:19.910081 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:54:20 crc kubenswrapper[4787]: I0127 07:54:20.008421 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ad51913-b8b4-46f0-90d1-8334a8ce8ebe-kubelet-dir\") pod \"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe\" (UID: \"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe\") " Jan 27 07:54:20 crc kubenswrapper[4787]: I0127 07:54:20.009132 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ad51913-b8b4-46f0-90d1-8334a8ce8ebe-kube-api-access\") pod \"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe\" (UID: \"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe\") " Jan 27 07:54:20 crc kubenswrapper[4787]: I0127 07:54:20.008882 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ad51913-b8b4-46f0-90d1-8334a8ce8ebe-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1ad51913-b8b4-46f0-90d1-8334a8ce8ebe" (UID: "1ad51913-b8b4-46f0-90d1-8334a8ce8ebe"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:54:20 crc kubenswrapper[4787]: I0127 07:54:20.009481 4787 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ad51913-b8b4-46f0-90d1-8334a8ce8ebe-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:20 crc kubenswrapper[4787]: I0127 07:54:20.017090 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ad51913-b8b4-46f0-90d1-8334a8ce8ebe-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1ad51913-b8b4-46f0-90d1-8334a8ce8ebe" (UID: "1ad51913-b8b4-46f0-90d1-8334a8ce8ebe"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:54:20 crc kubenswrapper[4787]: I0127 07:54:20.110745 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ad51913-b8b4-46f0-90d1-8334a8ce8ebe-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:20 crc kubenswrapper[4787]: I0127 07:54:20.139934 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1ad51913-b8b4-46f0-90d1-8334a8ce8ebe","Type":"ContainerDied","Data":"e04d4f313a2745357a590b2ccf3237015c1d625017bfc0adcfd3738e7cba346a"} Jan 27 07:54:20 crc kubenswrapper[4787]: I0127 07:54:20.139991 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e04d4f313a2745357a590b2ccf3237015c1d625017bfc0adcfd3738e7cba346a" Jan 27 07:54:20 crc kubenswrapper[4787]: I0127 07:54:20.140000 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:54:20 crc kubenswrapper[4787]: I0127 07:54:20.717660 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:54:20 crc kubenswrapper[4787]: I0127 07:54:20.734925 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jzdds" Jan 27 07:54:21 crc kubenswrapper[4787]: I0127 07:54:21.736248 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:54:21 crc kubenswrapper[4787]: I0127 07:54:21.743579 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3969f21f-ab36-49b4-9a9c-02cf19e65ad0-metrics-certs\") pod \"network-metrics-daemon-vws75\" (UID: \"3969f21f-ab36-49b4-9a9c-02cf19e65ad0\") " pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:54:21 crc kubenswrapper[4787]: I0127 07:54:21.794288 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vws75" Jan 27 07:54:22 crc kubenswrapper[4787]: I0127 07:54:22.414411 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g44f6"] Jan 27 07:54:22 crc kubenswrapper[4787]: I0127 07:54:22.414689 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" podUID="ca4c081d-896c-4cc8-9656-59364376de35" containerName="controller-manager" containerID="cri-o://18d9ef1e7d3e588a5d939868f7c0a651f90760ac04dbbd15cc83a8ed490076a9" gracePeriod=30 Jan 27 07:54:22 crc kubenswrapper[4787]: I0127 07:54:22.435081 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t"] Jan 27 07:54:22 crc kubenswrapper[4787]: I0127 07:54:22.435356 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" podUID="0259775f-1fef-486a-bc17-4638e38ed83f" containerName="route-controller-manager" containerID="cri-o://2b47bd63962bfe1d1f2626a0146931ca7d71c50c1e9a2b870fbb6f7f3fafcb6f" gracePeriod=30 Jan 27 07:54:22 crc kubenswrapper[4787]: I0127 07:54:22.823311 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:54:22 crc kubenswrapper[4787]: I0127 07:54:22.823406 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:54:23 crc kubenswrapper[4787]: I0127 07:54:23.160978 4787 generic.go:334] "Generic (PLEG): container finished" podID="0259775f-1fef-486a-bc17-4638e38ed83f" containerID="2b47bd63962bfe1d1f2626a0146931ca7d71c50c1e9a2b870fbb6f7f3fafcb6f" exitCode=0 Jan 27 07:54:23 crc kubenswrapper[4787]: I0127 07:54:23.161100 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" event={"ID":"0259775f-1fef-486a-bc17-4638e38ed83f","Type":"ContainerDied","Data":"2b47bd63962bfe1d1f2626a0146931ca7d71c50c1e9a2b870fbb6f7f3fafcb6f"} Jan 27 07:54:23 crc kubenswrapper[4787]: I0127 07:54:23.163478 4787 generic.go:334] "Generic (PLEG): container finished" podID="ca4c081d-896c-4cc8-9656-59364376de35" containerID="18d9ef1e7d3e588a5d939868f7c0a651f90760ac04dbbd15cc83a8ed490076a9" exitCode=0 Jan 27 07:54:23 crc kubenswrapper[4787]: I0127 07:54:23.163526 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" event={"ID":"ca4c081d-896c-4cc8-9656-59364376de35","Type":"ContainerDied","Data":"18d9ef1e7d3e588a5d939868f7c0a651f90760ac04dbbd15cc83a8ed490076a9"} Jan 27 07:54:26 crc kubenswrapper[4787]: I0127 07:54:26.699292 4787 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dlz8t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: 
connect: connection refused" start-of-body= Jan 27 07:54:26 crc kubenswrapper[4787]: I0127 07:54:26.699848 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" podUID="0259775f-1fef-486a-bc17-4638e38ed83f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 07:54:26 crc kubenswrapper[4787]: I0127 07:54:26.745741 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:54:26 crc kubenswrapper[4787]: I0127 07:54:26.750349 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qptnb" Jan 27 07:54:26 crc kubenswrapper[4787]: I0127 07:54:26.816340 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xwx4w" Jan 27 07:54:26 crc kubenswrapper[4787]: I0127 07:54:26.869626 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:54:26 crc kubenswrapper[4787]: I0127 07:54:26.928924 4787 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-g44f6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 27 07:54:26 crc kubenswrapper[4787]: I0127 07:54:26.929007 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" podUID="ca4c081d-896c-4cc8-9656-59364376de35" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 27 07:54:32 crc kubenswrapper[4787]: E0127 07:54:32.963714 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 07:54:32 crc kubenswrapper[4787]: E0127 07:54:32.964999 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkvcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ht6h9_openshift-marketplace(624e9a13-c9c5-4ef3-8628-056bfc65338b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:54:32 crc kubenswrapper[4787]: E0127 07:54:32.966226 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ht6h9" podUID="624e9a13-c9c5-4ef3-8628-056bfc65338b" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.352181 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xj5z" Jan 27 07:54:37 crc kubenswrapper[4787]: E0127 07:54:37.571920 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ht6h9" podUID="624e9a13-c9c5-4ef3-8628-056bfc65338b" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.622669 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.629363 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.674426 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm"] Jan 27 07:54:37 crc kubenswrapper[4787]: E0127 07:54:37.674903 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad51913-b8b4-46f0-90d1-8334a8ce8ebe" containerName="pruner" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.674932 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad51913-b8b4-46f0-90d1-8334a8ce8ebe" containerName="pruner" Jan 27 07:54:37 crc kubenswrapper[4787]: E0127 07:54:37.674950 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4c081d-896c-4cc8-9656-59364376de35" containerName="controller-manager" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.674958 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4c081d-896c-4cc8-9656-59364376de35" containerName="controller-manager" Jan 27 07:54:37 crc kubenswrapper[4787]: E0127 07:54:37.674977 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0259775f-1fef-486a-bc17-4638e38ed83f" containerName="route-controller-manager" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.674986 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0259775f-1fef-486a-bc17-4638e38ed83f" containerName="route-controller-manager" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.675091 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0259775f-1fef-486a-bc17-4638e38ed83f" containerName="route-controller-manager" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.675101 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ad51913-b8b4-46f0-90d1-8334a8ce8ebe" containerName="pruner" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.675113 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4c081d-896c-4cc8-9656-59364376de35" containerName="controller-manager" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.675723 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.677751 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm"] Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.699967 4787 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dlz8t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.700039 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" podUID="0259775f-1fef-486a-bc17-4638e38ed83f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.717076 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stktz\" (UniqueName: \"kubernetes.io/projected/0259775f-1fef-486a-bc17-4638e38ed83f-kube-api-access-stktz\") pod \"0259775f-1fef-486a-bc17-4638e38ed83f\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.717168 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-config\") pod \"ca4c081d-896c-4cc8-9656-59364376de35\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.717210 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0259775f-1fef-486a-bc17-4638e38ed83f-client-ca\") pod \"0259775f-1fef-486a-bc17-4638e38ed83f\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.717258 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-client-ca\") pod \"ca4c081d-896c-4cc8-9656-59364376de35\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.717288 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-proxy-ca-bundles\") pod \"ca4c081d-896c-4cc8-9656-59364376de35\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.717356 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0259775f-1fef-486a-bc17-4638e38ed83f-serving-cert\") pod \"0259775f-1fef-486a-bc17-4638e38ed83f\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.717393 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca4c081d-896c-4cc8-9656-59364376de35-serving-cert\") 
pod \"ca4c081d-896c-4cc8-9656-59364376de35\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.717441 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0259775f-1fef-486a-bc17-4638e38ed83f-config\") pod \"0259775f-1fef-486a-bc17-4638e38ed83f\" (UID: \"0259775f-1fef-486a-bc17-4638e38ed83f\") " Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.717493 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxmsq\" (UniqueName: \"kubernetes.io/projected/ca4c081d-896c-4cc8-9656-59364376de35-kube-api-access-kxmsq\") pod \"ca4c081d-896c-4cc8-9656-59364376de35\" (UID: \"ca4c081d-896c-4cc8-9656-59364376de35\") " Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.720049 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-config" (OuterVolumeSpecName: "config") pod "ca4c081d-896c-4cc8-9656-59364376de35" (UID: "ca4c081d-896c-4cc8-9656-59364376de35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.720232 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0259775f-1fef-486a-bc17-4638e38ed83f-client-ca" (OuterVolumeSpecName: "client-ca") pod "0259775f-1fef-486a-bc17-4638e38ed83f" (UID: "0259775f-1fef-486a-bc17-4638e38ed83f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.720367 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0259775f-1fef-486a-bc17-4638e38ed83f-config" (OuterVolumeSpecName: "config") pod "0259775f-1fef-486a-bc17-4638e38ed83f" (UID: "0259775f-1fef-486a-bc17-4638e38ed83f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.722399 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-client-ca" (OuterVolumeSpecName: "client-ca") pod "ca4c081d-896c-4cc8-9656-59364376de35" (UID: "ca4c081d-896c-4cc8-9656-59364376de35"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.723541 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ca4c081d-896c-4cc8-9656-59364376de35" (UID: "ca4c081d-896c-4cc8-9656-59364376de35"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.726727 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca4c081d-896c-4cc8-9656-59364376de35-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ca4c081d-896c-4cc8-9656-59364376de35" (UID: "ca4c081d-896c-4cc8-9656-59364376de35"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.728318 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4c081d-896c-4cc8-9656-59364376de35-kube-api-access-kxmsq" (OuterVolumeSpecName: "kube-api-access-kxmsq") pod "ca4c081d-896c-4cc8-9656-59364376de35" (UID: "ca4c081d-896c-4cc8-9656-59364376de35"). InnerVolumeSpecName "kube-api-access-kxmsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.728585 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0259775f-1fef-486a-bc17-4638e38ed83f-kube-api-access-stktz" (OuterVolumeSpecName: "kube-api-access-stktz") pod "0259775f-1fef-486a-bc17-4638e38ed83f" (UID: "0259775f-1fef-486a-bc17-4638e38ed83f"). InnerVolumeSpecName "kube-api-access-stktz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.728617 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0259775f-1fef-486a-bc17-4638e38ed83f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0259775f-1fef-486a-bc17-4638e38ed83f" (UID: "0259775f-1fef-486a-bc17-4638e38ed83f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819168 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-config\") pod \"route-controller-manager-cff864fb9-zqlsm\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819274 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-client-ca\") pod \"route-controller-manager-cff864fb9-zqlsm\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819316 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-serving-cert\") pod \"route-controller-manager-cff864fb9-zqlsm\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819361 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlktw\" (UniqueName: \"kubernetes.io/projected/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-kube-api-access-vlktw\") pod \"route-controller-manager-cff864fb9-zqlsm\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819415 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0259775f-1fef-486a-bc17-4638e38ed83f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819428 4787 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819441 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819454 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0259775f-1fef-486a-bc17-4638e38ed83f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819464 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca4c081d-896c-4cc8-9656-59364376de35-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819475 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0259775f-1fef-486a-bc17-4638e38ed83f-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819484 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxmsq\" (UniqueName: \"kubernetes.io/projected/ca4c081d-896c-4cc8-9656-59364376de35-kube-api-access-kxmsq\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819497 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stktz\" (UniqueName: \"kubernetes.io/projected/0259775f-1fef-486a-bc17-4638e38ed83f-kube-api-access-stktz\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.819508 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4c081d-896c-4cc8-9656-59364376de35-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.920836 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlktw\" (UniqueName: \"kubernetes.io/projected/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-kube-api-access-vlktw\") pod \"route-controller-manager-cff864fb9-zqlsm\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.920920 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-config\") pod \"route-controller-manager-cff864fb9-zqlsm\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.921015 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-client-ca\") pod \"route-controller-manager-cff864fb9-zqlsm\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.921064 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-serving-cert\") pod \"route-controller-manager-cff864fb9-zqlsm\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.923129 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-config\") pod \"route-controller-manager-cff864fb9-zqlsm\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.923258 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-client-ca\") pod \"route-controller-manager-cff864fb9-zqlsm\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.928744 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-serving-cert\") pod \"route-controller-manager-cff864fb9-zqlsm\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.928839 4787 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-g44f6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.928890 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" podUID="ca4c081d-896c-4cc8-9656-59364376de35" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 07:54:37 crc kubenswrapper[4787]: I0127 07:54:37.941398 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlktw\" (UniqueName: \"kubernetes.io/projected/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-kube-api-access-vlktw\") pod \"route-controller-manager-cff864fb9-zqlsm\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:38 crc kubenswrapper[4787]: I0127 07:54:38.008449 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:38 crc kubenswrapper[4787]: I0127 07:54:38.260729 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" event={"ID":"ca4c081d-896c-4cc8-9656-59364376de35","Type":"ContainerDied","Data":"7128b0f800ba7ef260edfd0bbe8553abf59eec44713d7f401d1f64afeed28b63"} Jan 27 07:54:38 crc kubenswrapper[4787]: I0127 07:54:38.260825 4787 scope.go:117] "RemoveContainer" containerID="18d9ef1e7d3e588a5d939868f7c0a651f90760ac04dbbd15cc83a8ed490076a9" Jan 27 07:54:38 crc kubenswrapper[4787]: I0127 07:54:38.260837 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g44f6" Jan 27 07:54:38 crc kubenswrapper[4787]: I0127 07:54:38.267744 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" event={"ID":"0259775f-1fef-486a-bc17-4638e38ed83f","Type":"ContainerDied","Data":"41afcd3d4f1c7a6dc4b744f46a303d0903dfaf2cc98bd0b97df786bbe6ad0ce1"} Jan 27 07:54:38 crc kubenswrapper[4787]: I0127 07:54:38.267884 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t" Jan 27 07:54:38 crc kubenswrapper[4787]: I0127 07:54:38.303537 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g44f6"] Jan 27 07:54:38 crc kubenswrapper[4787]: I0127 07:54:38.308721 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g44f6"] Jan 27 07:54:38 crc kubenswrapper[4787]: I0127 07:54:38.311538 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t"] Jan 27 07:54:38 crc kubenswrapper[4787]: I0127 07:54:38.313960 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dlz8t"] Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.085189 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0259775f-1fef-486a-bc17-4638e38ed83f" path="/var/lib/kubelet/pods/0259775f-1fef-486a-bc17-4638e38ed83f/volumes" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.085860 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4c081d-896c-4cc8-9656-59364376de35" path="/var/lib/kubelet/pods/ca4c081d-896c-4cc8-9656-59364376de35/volumes" Jan 27 07:54:39 crc kubenswrapper[4787]: E0127 07:54:39.399407 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 07:54:39 crc kubenswrapper[4787]: E0127 07:54:39.399650 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5r7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cgwgp_openshift-marketplace(0368ea79-94a8-42e3-8986-dddbec83d755): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:54:39 crc kubenswrapper[4787]: E0127 07:54:39.402764 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cgwgp" podUID="0368ea79-94a8-42e3-8986-dddbec83d755" Jan 27 07:54:39 crc kubenswrapper[4787]: E0127 07:54:39.402992 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 07:54:39 crc kubenswrapper[4787]: E0127 07:54:39.403223 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5tmpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tnxzl_openshift-marketplace(44cc5af2-0636-4589-a988-e7e32bfea075): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:54:39 crc kubenswrapper[4787]: E0127 07:54:39.404413 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tnxzl" podUID="44cc5af2-0636-4589-a988-e7e32bfea075" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.803415 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7666b944cf-w5xrh"] Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.804541 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.806631 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.806852 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.807184 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.807336 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.807673 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.807749 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.815512 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7666b944cf-w5xrh"] Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.815535 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.849041 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-client-ca\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.849089 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c5m4\" (UniqueName: \"kubernetes.io/projected/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-kube-api-access-5c5m4\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.849152 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-config\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.849206 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-serving-cert\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.849235 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-proxy-ca-bundles\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.950892 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-client-ca\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.950959 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c5m4\" (UniqueName: \"kubernetes.io/projected/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-kube-api-access-5c5m4\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.951060 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-config\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.951145 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-serving-cert\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.951174 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-proxy-ca-bundles\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.952123 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-client-ca\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.952438 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-proxy-ca-bundles\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.953878 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-config\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " 
pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.962659 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-serving-cert\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:39 crc kubenswrapper[4787]: I0127 07:54:39.967880 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c5m4\" (UniqueName: \"kubernetes.io/projected/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-kube-api-access-5c5m4\") pod \"controller-manager-7666b944cf-w5xrh\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:40 crc kubenswrapper[4787]: I0127 07:54:40.144325 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:42 crc kubenswrapper[4787]: I0127 07:54:42.405632 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7666b944cf-w5xrh"] Jan 27 07:54:42 crc kubenswrapper[4787]: I0127 07:54:42.506301 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm"] Jan 27 07:54:43 crc kubenswrapper[4787]: E0127 07:54:43.081999 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cgwgp" podUID="0368ea79-94a8-42e3-8986-dddbec83d755" Jan 27 07:54:43 crc kubenswrapper[4787]: E0127 07:54:43.082062 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tnxzl" podUID="44cc5af2-0636-4589-a988-e7e32bfea075" Jan 27 07:54:43 crc kubenswrapper[4787]: I0127 07:54:43.103267 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:54:43 crc kubenswrapper[4787]: E0127 07:54:43.109281 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 07:54:43 crc kubenswrapper[4787]: E0127 07:54:43.109879 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-76lt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kkjnl_openshift-marketplace(1e42281e-20ed-4105-866f-878ffbf6c6eb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:54:43 crc kubenswrapper[4787]: E0127 07:54:43.111075 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kkjnl" podUID="1e42281e-20ed-4105-866f-878ffbf6c6eb" Jan 27 07:54:43 crc kubenswrapper[4787]: I0127 07:54:43.280972 4787 scope.go:117] "RemoveContainer" containerID="2b47bd63962bfe1d1f2626a0146931ca7d71c50c1e9a2b870fbb6f7f3fafcb6f" Jan 27 07:54:43 crc kubenswrapper[4787]: E0127 07:54:43.322300 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kkjnl" podUID="1e42281e-20ed-4105-866f-878ffbf6c6eb" Jan 27 07:54:43 crc kubenswrapper[4787]: I0127 07:54:43.404359 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vws75"] Jan 27 07:54:43 crc kubenswrapper[4787]: I0127 07:54:43.569312 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7666b944cf-w5xrh"] Jan 27 07:54:43 crc kubenswrapper[4787]: I0127 07:54:43.733058 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm"] Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.317809 4787 generic.go:334] "Generic (PLEG): container finished" podID="2a1ecf2a-6b91-48d7-873e-918dd4e045fc" containerID="e44397098e422e87e1da7b248b23d5f52f1cfcc38a65342a95ecf9164e6322e7" exitCode=0 Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.317882 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-672cg" 
event={"ID":"2a1ecf2a-6b91-48d7-873e-918dd4e045fc","Type":"ContainerDied","Data":"e44397098e422e87e1da7b248b23d5f52f1cfcc38a65342a95ecf9164e6322e7"} Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.321162 4787 generic.go:334] "Generic (PLEG): container finished" podID="7af45749-cd7f-490a-b2f6-bf979ea6467e" containerID="629cf5f25077b459468d115da73f8db540be3990ca6244c0ac21ff90a31aed2d" exitCode=0 Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.321254 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npc57" event={"ID":"7af45749-cd7f-490a-b2f6-bf979ea6467e","Type":"ContainerDied","Data":"629cf5f25077b459468d115da73f8db540be3990ca6244c0ac21ff90a31aed2d"} Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.324593 4787 generic.go:334] "Generic (PLEG): container finished" podID="264daa3b-f5d9-407d-8835-35845525329a" containerID="1b10219e013f243bd9fe375bd99faab099e08f41d538f33974e456fa1220d7b7" exitCode=0 Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.324706 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9l2" event={"ID":"264daa3b-f5d9-407d-8835-35845525329a","Type":"ContainerDied","Data":"1b10219e013f243bd9fe375bd99faab099e08f41d538f33974e456fa1220d7b7"} Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.327484 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjd6t" event={"ID":"5f5e46e1-1252-4a0a-ad11-8d4236f3759b","Type":"ContainerStarted","Data":"1309a0eb542609f82174c456dbe308ca94a8f5093a7b41ff79e5822733bbd396"} Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.329571 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" event={"ID":"9af0e159-7fb4-47fe-8d5a-fd19fee5a744","Type":"ContainerStarted","Data":"e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736"} Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.329645 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.329662 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" event={"ID":"9af0e159-7fb4-47fe-8d5a-fd19fee5a744","Type":"ContainerStarted","Data":"884316bdef089f92bdbee924916583c28a60a32993cc5267435c350c9bb28052"} Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.329975 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" podUID="9af0e159-7fb4-47fe-8d5a-fd19fee5a744" containerName="controller-manager" containerID="cri-o://e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736" gracePeriod=30 Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.332169 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" event={"ID":"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c","Type":"ContainerStarted","Data":"0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800"} Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.332204 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" 
event={"ID":"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c","Type":"ContainerStarted","Data":"1e78828407f5d5d949efad0e2b9135a9aabece8423effd00f64caebe0c4de681"} Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.332294 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" podUID="1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c" containerName="route-controller-manager" containerID="cri-o://0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800" gracePeriod=30 Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.332380 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.342607 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vws75" event={"ID":"3969f21f-ab36-49b4-9a9c-02cf19e65ad0","Type":"ContainerStarted","Data":"385ed217c7bbaf2e23de994a50442d796713a2dd7491664da043cf4eaa88f48f"} Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.342658 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vws75" event={"ID":"3969f21f-ab36-49b4-9a9c-02cf19e65ad0","Type":"ContainerStarted","Data":"f297f5ad07c994b453d6aceadb7664527c76c902648e36505ce47ccbd9bf9b79"} Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.342668 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vws75" event={"ID":"3969f21f-ab36-49b4-9a9c-02cf19e65ad0","Type":"ContainerStarted","Data":"f9ba1717fd7cd617b8a2ba404bfc59d5b5cb58688f589de88190478a9ee60ca6"} Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.349881 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.389612 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vws75" podStartSLOduration=165.389592315 podStartE2EDuration="2m45.389592315s" podCreationTimestamp="2026-01-27 07:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:44.387989859 +0000 UTC m=+190.040345341" watchObservedRunningTime="2026-01-27 07:54:44.389592315 +0000 UTC m=+190.041947797" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.414502 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" podStartSLOduration=22.414481615 podStartE2EDuration="22.414481615s" podCreationTimestamp="2026-01-27 07:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:44.410374141 +0000 UTC m=+190.062729633" watchObservedRunningTime="2026-01-27 07:54:44.414481615 +0000 UTC m=+190.066837107" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.432792 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" podStartSLOduration=22.432762523 podStartE2EDuration="22.432762523s" podCreationTimestamp="2026-01-27 07:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:44.428974186 +0000 UTC m=+190.081329678" watchObservedRunningTime="2026-01-27 07:54:44.432762523 +0000 UTC m=+190.085118005" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.527501 4787 patch_prober.go:28] interesting pod/route-controller-manager-cff864fb9-zqlsm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:46314->10.217.0.54:8443: read: connection reset by peer" start-of-body= Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.527589 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" podUID="1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:46314->10.217.0.54:8443: read: connection reset by peer" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.762636 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.782854 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-cff864fb9-zqlsm_1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c/route-controller-manager/0.log" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.783008 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.829893 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-client-ca\") pod \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.829966 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-serving-cert\") pod \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.829997 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c5m4\" (UniqueName: \"kubernetes.io/projected/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-kube-api-access-5c5m4\") pod \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.830068 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-serving-cert\") pod \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.830158 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-client-ca\") pod \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.830187 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-config\") pod \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.830236 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-proxy-ca-bundles\") pod \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\" (UID: \"9af0e159-7fb4-47fe-8d5a-fd19fee5a744\") " Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.830282 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-config\") pod \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.830338 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlktw\" (UniqueName: \"kubernetes.io/projected/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-kube-api-access-vlktw\") pod \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\" (UID: \"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c\") " Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.832455 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-config" (OuterVolumeSpecName: "config") pod "9af0e159-7fb4-47fe-8d5a-fd19fee5a744" (UID: "9af0e159-7fb4-47fe-8d5a-fd19fee5a744"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.832495 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9af0e159-7fb4-47fe-8d5a-fd19fee5a744" (UID: "9af0e159-7fb4-47fe-8d5a-fd19fee5a744"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.832930 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-client-ca" (OuterVolumeSpecName: "client-ca") pod "9af0e159-7fb4-47fe-8d5a-fd19fee5a744" (UID: "9af0e159-7fb4-47fe-8d5a-fd19fee5a744"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.835833 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-config" (OuterVolumeSpecName: "config") pod "1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c" (UID: "1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.836026 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-client-ca" (OuterVolumeSpecName: "client-ca") pod "1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c" (UID: "1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.838884 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7598866b8d-sz455"] Jan 27 07:54:44 crc kubenswrapper[4787]: E0127 07:54:44.839298 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c" containerName="route-controller-manager" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.839312 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c" containerName="route-controller-manager" Jan 27 07:54:44 crc kubenswrapper[4787]: E0127 07:54:44.839328 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af0e159-7fb4-47fe-8d5a-fd19fee5a744" containerName="controller-manager" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.839336 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af0e159-7fb4-47fe-8d5a-fd19fee5a744" containerName="controller-manager" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.839457 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c" containerName="route-controller-manager" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.839474 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af0e159-7fb4-47fe-8d5a-fd19fee5a744" containerName="controller-manager" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.840025 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.841326 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-kube-api-access-5c5m4" (OuterVolumeSpecName: "kube-api-access-5c5m4") pod "9af0e159-7fb4-47fe-8d5a-fd19fee5a744" (UID: "9af0e159-7fb4-47fe-8d5a-fd19fee5a744"). InnerVolumeSpecName "kube-api-access-5c5m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.841382 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-kube-api-access-vlktw" (OuterVolumeSpecName: "kube-api-access-vlktw") pod "1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c" (UID: "1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c"). InnerVolumeSpecName "kube-api-access-vlktw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.842071 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c" (UID: "1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.845621 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9af0e159-7fb4-47fe-8d5a-fd19fee5a744" (UID: "9af0e159-7fb4-47fe-8d5a-fd19fee5a744"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.854075 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7598866b8d-sz455"] Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932570 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6dnj\" (UniqueName: \"kubernetes.io/projected/1a623b08-d78c-4d91-bff0-6535d713eb2c-kube-api-access-z6dnj\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932642 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a623b08-d78c-4d91-bff0-6535d713eb2c-serving-cert\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932693 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-config\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932717 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-proxy-ca-bundles\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932741 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-client-ca\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932813 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932827 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932839 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932865 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932879 4787 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932892 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlktw\" (UniqueName: \"kubernetes.io/projected/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-kube-api-access-vlktw\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932907 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932919 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:44 crc kubenswrapper[4787]: I0127 07:54:44.932932 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c5m4\" (UniqueName: \"kubernetes.io/projected/9af0e159-7fb4-47fe-8d5a-fd19fee5a744-kube-api-access-5c5m4\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.034300 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6dnj\" (UniqueName: \"kubernetes.io/projected/1a623b08-d78c-4d91-bff0-6535d713eb2c-kube-api-access-z6dnj\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.034368 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a623b08-d78c-4d91-bff0-6535d713eb2c-serving-cert\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.034413 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-config\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.034445 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-proxy-ca-bundles\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.034465 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-client-ca\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.036075 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-client-ca\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.036393 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-config\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.036774 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-proxy-ca-bundles\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.039939 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a623b08-d78c-4d91-bff0-6535d713eb2c-serving-cert\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.055395 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6dnj\" (UniqueName: \"kubernetes.io/projected/1a623b08-d78c-4d91-bff0-6535d713eb2c-kube-api-access-z6dnj\") pod \"controller-manager-7598866b8d-sz455\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.164817 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.377092 4787 generic.go:334] "Generic (PLEG): container finished" podID="9af0e159-7fb4-47fe-8d5a-fd19fee5a744" containerID="e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736" exitCode=0 Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.377161 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.377197 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" event={"ID":"9af0e159-7fb4-47fe-8d5a-fd19fee5a744","Type":"ContainerDied","Data":"e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736"} Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.377312 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7666b944cf-w5xrh" event={"ID":"9af0e159-7fb4-47fe-8d5a-fd19fee5a744","Type":"ContainerDied","Data":"884316bdef089f92bdbee924916583c28a60a32993cc5267435c350c9bb28052"} Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.377347 4787 scope.go:117] "RemoveContainer" containerID="e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.382094 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-cff864fb9-zqlsm_1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c/route-controller-manager/0.log" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.382144 4787 generic.go:334] "Generic (PLEG): container finished" podID="1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c" containerID="0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800" exitCode=255 Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.382224 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" event={"ID":"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c","Type":"ContainerDied","Data":"0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800"} Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.382262 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" event={"ID":"1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c","Type":"ContainerDied","Data":"1e78828407f5d5d949efad0e2b9135a9aabece8423effd00f64caebe0c4de681"} Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.382224 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.384418 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-672cg" event={"ID":"2a1ecf2a-6b91-48d7-873e-918dd4e045fc","Type":"ContainerStarted","Data":"be7ffa9db32f83f23699c5d5677bd9eb9428eb33ca38c00529df07ae62a31547"} Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.390329 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npc57" event={"ID":"7af45749-cd7f-490a-b2f6-bf979ea6467e","Type":"ContainerStarted","Data":"3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79"} Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.394777 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9l2" event={"ID":"264daa3b-f5d9-407d-8835-35845525329a","Type":"ContainerStarted","Data":"92972139bf2205651890b39b93a8107ed20a0a61ec1230b6775bffc306789804"} Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.397636 4787 generic.go:334] "Generic (PLEG): container finished" podID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" containerID="1309a0eb542609f82174c456dbe308ca94a8f5093a7b41ff79e5822733bbd396" exitCode=0 Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.397795 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjd6t" event={"ID":"5f5e46e1-1252-4a0a-ad11-8d4236f3759b","Type":"ContainerDied","Data":"1309a0eb542609f82174c456dbe308ca94a8f5093a7b41ff79e5822733bbd396"} Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.407588 4787 scope.go:117] "RemoveContainer" containerID="e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736" Jan 27 07:54:45 crc kubenswrapper[4787]: E0127 07:54:45.409139 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736\": container with ID starting with e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736 not found: ID does not exist" containerID="e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.409197 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736"} err="failed to get container status \"e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736\": rpc error: code = NotFound desc = could not find container \"e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736\": container with ID starting with e3ab5135381411334039023b381683fb63d36e5ab4b56c6f4ac030bf964a1736 not found: ID does not exist" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.409262 4787 scope.go:117] "RemoveContainer" containerID="0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.417186 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-672cg" podStartSLOduration=3.190670525 podStartE2EDuration="41.417163958s" podCreationTimestamp="2026-01-27 07:54:04 +0000 UTC" firstStartedPulling="2026-01-27 07:54:06.854581318 +0000 UTC m=+152.506936810" lastFinishedPulling="2026-01-27 07:54:45.081074751 +0000 UTC m=+190.733430243" 
observedRunningTime="2026-01-27 07:54:45.416592895 +0000 UTC m=+191.068948407" watchObservedRunningTime="2026-01-27 07:54:45.417163958 +0000 UTC m=+191.069519470" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.438688 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-npc57" podStartSLOduration=3.202571181 podStartE2EDuration="41.43865154s" podCreationTimestamp="2026-01-27 07:54:04 +0000 UTC" firstStartedPulling="2026-01-27 07:54:06.828324421 +0000 UTC m=+152.480679913" lastFinishedPulling="2026-01-27 07:54:45.06440478 +0000 UTC m=+190.716760272" observedRunningTime="2026-01-27 07:54:45.434631407 +0000 UTC m=+191.086986919" watchObservedRunningTime="2026-01-27 07:54:45.43865154 +0000 UTC m=+191.091007032" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.440387 4787 scope.go:117] "RemoveContainer" containerID="0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800" Jan 27 07:54:45 crc kubenswrapper[4787]: E0127 07:54:45.441263 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800\": container with ID starting with 0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800 not found: ID does not exist" containerID="0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.441302 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800"} err="failed to get container status \"0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800\": rpc error: code = NotFound desc = could not find container \"0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800\": container with ID starting with 0b46d3e81ef90d81b3aecbd4aadc3d93c494fbfa41092064abad2218b8d21800 not found: ID does not exist" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.450617 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7666b944cf-w5xrh"] Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.454068 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7666b944cf-w5xrh"] Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.468100 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm"] Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.475475 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cff864fb9-zqlsm"] Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.484270 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hz9l2" podStartSLOduration=3.582884563 podStartE2EDuration="39.484246593s" podCreationTimestamp="2026-01-27 07:54:06 +0000 UTC" firstStartedPulling="2026-01-27 07:54:08.967714522 +0000 UTC m=+154.620070014" lastFinishedPulling="2026-01-27 07:54:44.869076552 +0000 UTC m=+190.521432044" observedRunningTime="2026-01-27 07:54:45.481342026 +0000 UTC m=+191.133697518" watchObservedRunningTime="2026-01-27 07:54:45.484246593 +0000 UTC m=+191.136602095" Jan 27 07:54:45 crc kubenswrapper[4787]: I0127 07:54:45.668206 4787 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-controller-manager/controller-manager-7598866b8d-sz455"] Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.406667 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" event={"ID":"1a623b08-d78c-4d91-bff0-6535d713eb2c","Type":"ContainerStarted","Data":"ec5d5918617f4cf77b4f25c6da400fa3ea9bb3863c461eac264ef2c1e57344ea"} Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.406737 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" event={"ID":"1a623b08-d78c-4d91-bff0-6535d713eb2c","Type":"ContainerStarted","Data":"3da2e262993ac94cc9dc72bad86247ba0291e69ae87e9695f743aab86626a5a0"} Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.407069 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.431476 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.436413 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" podStartSLOduration=4.436396311 podStartE2EDuration="4.436396311s" podCreationTimestamp="2026-01-27 07:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:46.433269259 +0000 UTC m=+192.085624761" watchObservedRunningTime="2026-01-27 07:54:46.436396311 +0000 UTC m=+192.088751813" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.808098 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl"] Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.808876 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.811249 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.811686 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.816444 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.816580 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.816664 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.816785 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.827883 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl"] Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.861951 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6381e89b-99e4-480e-9ab0-14a99d22b889-config\") pod \"route-controller-manager-774cb7c4cb-j9mkl\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.862052 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qplk2\" (UniqueName: \"kubernetes.io/projected/6381e89b-99e4-480e-9ab0-14a99d22b889-kube-api-access-qplk2\") pod \"route-controller-manager-774cb7c4cb-j9mkl\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.862087 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6381e89b-99e4-480e-9ab0-14a99d22b889-serving-cert\") pod \"route-controller-manager-774cb7c4cb-j9mkl\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.862147 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6381e89b-99e4-480e-9ab0-14a99d22b889-client-ca\") pod \"route-controller-manager-774cb7c4cb-j9mkl\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.963200 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6381e89b-99e4-480e-9ab0-14a99d22b889-config\") pod 
\"route-controller-manager-774cb7c4cb-j9mkl\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.963719 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qplk2\" (UniqueName: \"kubernetes.io/projected/6381e89b-99e4-480e-9ab0-14a99d22b889-kube-api-access-qplk2\") pod \"route-controller-manager-774cb7c4cb-j9mkl\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.963763 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6381e89b-99e4-480e-9ab0-14a99d22b889-serving-cert\") pod \"route-controller-manager-774cb7c4cb-j9mkl\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.963801 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6381e89b-99e4-480e-9ab0-14a99d22b889-client-ca\") pod \"route-controller-manager-774cb7c4cb-j9mkl\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.964508 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6381e89b-99e4-480e-9ab0-14a99d22b889-config\") pod \"route-controller-manager-774cb7c4cb-j9mkl\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.964569 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6381e89b-99e4-480e-9ab0-14a99d22b889-client-ca\") pod \"route-controller-manager-774cb7c4cb-j9mkl\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.971643 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6381e89b-99e4-480e-9ab0-14a99d22b889-serving-cert\") pod \"route-controller-manager-774cb7c4cb-j9mkl\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:46 crc kubenswrapper[4787]: I0127 07:54:46.984720 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qplk2\" (UniqueName: \"kubernetes.io/projected/6381e89b-99e4-480e-9ab0-14a99d22b889-kube-api-access-qplk2\") pod \"route-controller-manager-774cb7c4cb-j9mkl\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.084237 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c" path="/var/lib/kubelet/pods/1696f0ba-1f0c-4f1a-8510-bcdaa31fcb1c/volumes" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.085478 4787 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="9af0e159-7fb4-47fe-8d5a-fd19fee5a744" path="/var/lib/kubelet/pods/9af0e159-7fb4-47fe-8d5a-fd19fee5a744/volumes" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.153410 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.171465 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.172318 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.176503 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.176595 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.191666 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.268387 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd23d06a-7017-4d11-a361-c556d5465959-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dd23d06a-7017-4d11-a361-c556d5465959\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.268464 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd23d06a-7017-4d11-a361-c556d5465959-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dd23d06a-7017-4d11-a361-c556d5465959\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.274606 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.282743 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.370788 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd23d06a-7017-4d11-a361-c556d5465959-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dd23d06a-7017-4d11-a361-c556d5465959\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.371514 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd23d06a-7017-4d11-a361-c556d5465959-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dd23d06a-7017-4d11-a361-c556d5465959\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.371693 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd23d06a-7017-4d11-a361-c556d5465959-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"dd23d06a-7017-4d11-a361-c556d5465959\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.401673 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd23d06a-7017-4d11-a361-c556d5465959-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dd23d06a-7017-4d11-a361-c556d5465959\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.442519 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjd6t" event={"ID":"5f5e46e1-1252-4a0a-ad11-8d4236f3759b","Type":"ContainerStarted","Data":"273c3736c041c8021a3e5eb647bdda5506773181a199572370f5be6b85004a04"} Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.469178 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl"] Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.469932 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qjd6t" podStartSLOduration=3.753709912 podStartE2EDuration="40.469662484s" podCreationTimestamp="2026-01-27 07:54:07 +0000 UTC" firstStartedPulling="2026-01-27 07:54:10.032516346 +0000 UTC m=+155.684885719" lastFinishedPulling="2026-01-27 07:54:46.748482799 +0000 UTC m=+192.400838291" observedRunningTime="2026-01-27 07:54:47.465442718 +0000 UTC m=+193.117798220" watchObservedRunningTime="2026-01-27 07:54:47.469662484 +0000 UTC m=+193.122017986" Jan 27 07:54:47 crc kubenswrapper[4787]: W0127 07:54:47.476202 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6381e89b_99e4_480e_9ab0_14a99d22b889.slice/crio-d01da66c31038ba9bcead586f78d183db79fbd103a10f03106ce884e28edd5a4 WatchSource:0}: Error finding container d01da66c31038ba9bcead586f78d183db79fbd103a10f03106ce884e28edd5a4: Status 404 returned error can't find the container with id d01da66c31038ba9bcead586f78d183db79fbd103a10f03106ce884e28edd5a4 Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.558947 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:54:47 crc kubenswrapper[4787]: I0127 07:54:47.785311 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wdppr"] Jan 27 07:54:48 crc kubenswrapper[4787]: I0127 07:54:48.231911 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:48 crc kubenswrapper[4787]: I0127 07:54:48.232467 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:48 crc kubenswrapper[4787]: I0127 07:54:48.386017 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 07:54:48 crc kubenswrapper[4787]: I0127 07:54:48.467212 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" event={"ID":"6381e89b-99e4-480e-9ab0-14a99d22b889","Type":"ContainerStarted","Data":"ab12b522093957a0418fad02bce7b15c7c854402c69a9e25ea60cea48243dfae"} Jan 27 07:54:48 crc kubenswrapper[4787]: I0127 07:54:48.467775 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" event={"ID":"6381e89b-99e4-480e-9ab0-14a99d22b889","Type":"ContainerStarted","Data":"d01da66c31038ba9bcead586f78d183db79fbd103a10f03106ce884e28edd5a4"} Jan 27 07:54:48 crc kubenswrapper[4787]: I0127 07:54:48.467799 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:48 crc kubenswrapper[4787]: I0127 07:54:48.473415 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dd23d06a-7017-4d11-a361-c556d5465959","Type":"ContainerStarted","Data":"e2f97ea5734c0c010856c4bb35c11965ffb4340ae9d1a8cd3f54f20186f9c38d"} Jan 27 07:54:48 crc kubenswrapper[4787]: I0127 07:54:48.487720 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" podStartSLOduration=6.487693709 podStartE2EDuration="6.487693709s" podCreationTimestamp="2026-01-27 07:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:48.485912448 +0000 UTC m=+194.138267950" watchObservedRunningTime="2026-01-27 07:54:48.487693709 +0000 UTC m=+194.140049201" Jan 27 07:54:48 crc kubenswrapper[4787]: I0127 07:54:48.506108 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:54:48 crc kubenswrapper[4787]: I0127 07:54:48.506812 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hz9l2" podUID="264daa3b-f5d9-407d-8835-35845525329a" containerName="registry-server" probeResult="failure" output=< Jan 27 07:54:48 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 27 07:54:48 crc kubenswrapper[4787]: > Jan 27 07:54:49 crc kubenswrapper[4787]: I0127 07:54:49.296408 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjd6t" podUID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" containerName="registry-server" 
probeResult="failure" output=< Jan 27 07:54:49 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 27 07:54:49 crc kubenswrapper[4787]: > Jan 27 07:54:49 crc kubenswrapper[4787]: I0127 07:54:49.500049 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dd23d06a-7017-4d11-a361-c556d5465959","Type":"ContainerStarted","Data":"ef83b7d239fca38c8168a0c9a3d2feda0dca8b890e19aa7d945f7a660543828b"} Jan 27 07:54:49 crc kubenswrapper[4787]: I0127 07:54:49.524251 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.524225777 podStartE2EDuration="2.524225777s" podCreationTimestamp="2026-01-27 07:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:49.522456157 +0000 UTC m=+195.174811639" watchObservedRunningTime="2026-01-27 07:54:49.524225777 +0000 UTC m=+195.176581279" Jan 27 07:54:50 crc kubenswrapper[4787]: I0127 07:54:50.511292 4787 generic.go:334] "Generic (PLEG): container finished" podID="dd23d06a-7017-4d11-a361-c556d5465959" containerID="ef83b7d239fca38c8168a0c9a3d2feda0dca8b890e19aa7d945f7a660543828b" exitCode=0 Jan 27 07:54:50 crc kubenswrapper[4787]: I0127 07:54:50.511523 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dd23d06a-7017-4d11-a361-c556d5465959","Type":"ContainerDied","Data":"ef83b7d239fca38c8168a0c9a3d2feda0dca8b890e19aa7d945f7a660543828b"} Jan 27 07:54:50 crc kubenswrapper[4787]: I0127 07:54:50.514321 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht6h9" event={"ID":"624e9a13-c9c5-4ef3-8628-056bfc65338b","Type":"ContainerStarted","Data":"ec5e3c96098a08d05b7b94cefe4b2f9c131e71b862786c006019c2dec6da5498"} Jan 27 07:54:51 crc kubenswrapper[4787]: I0127 07:54:51.521201 4787 generic.go:334] "Generic (PLEG): container finished" podID="624e9a13-c9c5-4ef3-8628-056bfc65338b" containerID="ec5e3c96098a08d05b7b94cefe4b2f9c131e71b862786c006019c2dec6da5498" exitCode=0 Jan 27 07:54:51 crc kubenswrapper[4787]: I0127 07:54:51.521421 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht6h9" event={"ID":"624e9a13-c9c5-4ef3-8628-056bfc65338b","Type":"ContainerDied","Data":"ec5e3c96098a08d05b7b94cefe4b2f9c131e71b862786c006019c2dec6da5498"} Jan 27 07:54:51 crc kubenswrapper[4787]: I0127 07:54:51.805760 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:54:51 crc kubenswrapper[4787]: I0127 07:54:51.851206 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd23d06a-7017-4d11-a361-c556d5465959-kube-api-access\") pod \"dd23d06a-7017-4d11-a361-c556d5465959\" (UID: \"dd23d06a-7017-4d11-a361-c556d5465959\") " Jan 27 07:54:51 crc kubenswrapper[4787]: I0127 07:54:51.851281 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd23d06a-7017-4d11-a361-c556d5465959-kubelet-dir\") pod \"dd23d06a-7017-4d11-a361-c556d5465959\" (UID: \"dd23d06a-7017-4d11-a361-c556d5465959\") " Jan 27 07:54:51 crc kubenswrapper[4787]: I0127 07:54:51.854470 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd23d06a-7017-4d11-a361-c556d5465959-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dd23d06a-7017-4d11-a361-c556d5465959" (UID: "dd23d06a-7017-4d11-a361-c556d5465959"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:54:51 crc kubenswrapper[4787]: I0127 07:54:51.859172 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd23d06a-7017-4d11-a361-c556d5465959-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dd23d06a-7017-4d11-a361-c556d5465959" (UID: "dd23d06a-7017-4d11-a361-c556d5465959"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:54:51 crc kubenswrapper[4787]: I0127 07:54:51.953144 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd23d06a-7017-4d11-a361-c556d5465959-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:51 crc kubenswrapper[4787]: I0127 07:54:51.953193 4787 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd23d06a-7017-4d11-a361-c556d5465959-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:54:52 crc kubenswrapper[4787]: I0127 07:54:52.528802 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:54:52 crc kubenswrapper[4787]: I0127 07:54:52.529298 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dd23d06a-7017-4d11-a361-c556d5465959","Type":"ContainerDied","Data":"e2f97ea5734c0c010856c4bb35c11965ffb4340ae9d1a8cd3f54f20186f9c38d"} Jan 27 07:54:52 crc kubenswrapper[4787]: I0127 07:54:52.529338 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2f97ea5734c0c010856c4bb35c11965ffb4340ae9d1a8cd3f54f20186f9c38d" Jan 27 07:54:52 crc kubenswrapper[4787]: I0127 07:54:52.532256 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht6h9" event={"ID":"624e9a13-c9c5-4ef3-8628-056bfc65338b","Type":"ContainerStarted","Data":"c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864"} Jan 27 07:54:52 crc kubenswrapper[4787]: I0127 07:54:52.552810 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ht6h9" podStartSLOduration=2.463139923 podStartE2EDuration="46.552777191s" podCreationTimestamp="2026-01-27 07:54:06 +0000 UTC" firstStartedPulling="2026-01-27 07:54:07.871092545 +0000 UTC m=+153.523448037" lastFinishedPulling="2026-01-27 07:54:51.960729813 +0000 UTC m=+197.613085305" observedRunningTime="2026-01-27 07:54:52.549216621 +0000 UTC m=+198.201572123" watchObservedRunningTime="2026-01-27 07:54:52.552777191 +0000 UTC m=+198.205132683" Jan 27 07:54:52 crc kubenswrapper[4787]: I0127 07:54:52.823337 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:54:52 crc kubenswrapper[4787]: I0127 07:54:52.823408 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.167669 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 07:54:54 crc kubenswrapper[4787]: E0127 07:54:54.168623 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd23d06a-7017-4d11-a361-c556d5465959" containerName="pruner" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.168645 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd23d06a-7017-4d11-a361-c556d5465959" containerName="pruner" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.168778 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd23d06a-7017-4d11-a361-c556d5465959" containerName="pruner" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.169348 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.172693 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.172730 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.181365 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.288056 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d78a135f-a7de-4bb7-b2aa-168fb353658f-kube-api-access\") pod \"installer-9-crc\" (UID: \"d78a135f-a7de-4bb7-b2aa-168fb353658f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.288139 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d78a135f-a7de-4bb7-b2aa-168fb353658f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d78a135f-a7de-4bb7-b2aa-168fb353658f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.288183 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d78a135f-a7de-4bb7-b2aa-168fb353658f-var-lock\") pod \"installer-9-crc\" (UID: \"d78a135f-a7de-4bb7-b2aa-168fb353658f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.389279 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d78a135f-a7de-4bb7-b2aa-168fb353658f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d78a135f-a7de-4bb7-b2aa-168fb353658f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.389372 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d78a135f-a7de-4bb7-b2aa-168fb353658f-var-lock\") pod \"installer-9-crc\" (UID: \"d78a135f-a7de-4bb7-b2aa-168fb353658f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.389426 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d78a135f-a7de-4bb7-b2aa-168fb353658f-kube-api-access\") pod \"installer-9-crc\" (UID: \"d78a135f-a7de-4bb7-b2aa-168fb353658f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.389419 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d78a135f-a7de-4bb7-b2aa-168fb353658f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d78a135f-a7de-4bb7-b2aa-168fb353658f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.389509 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d78a135f-a7de-4bb7-b2aa-168fb353658f-var-lock\") pod \"installer-9-crc\" (UID: 
\"d78a135f-a7de-4bb7-b2aa-168fb353658f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.412706 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d78a135f-a7de-4bb7-b2aa-168fb353658f-kube-api-access\") pod \"installer-9-crc\" (UID: \"d78a135f-a7de-4bb7-b2aa-168fb353658f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.497377 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.767267 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.767925 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.842333 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:54 crc kubenswrapper[4787]: I0127 07:54:54.942586 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 07:54:55 crc kubenswrapper[4787]: I0127 07:54:55.071197 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:55 crc kubenswrapper[4787]: I0127 07:54:55.071687 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:55 crc kubenswrapper[4787]: I0127 07:54:55.135211 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:55 crc kubenswrapper[4787]: I0127 07:54:55.550332 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d78a135f-a7de-4bb7-b2aa-168fb353658f","Type":"ContainerStarted","Data":"c5a010ebcd473be981facf7a933739c86185647f5b265aa8a4c5d4030136ffd4"} Jan 27 07:54:55 crc kubenswrapper[4787]: I0127 07:54:55.591854 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:55 crc kubenswrapper[4787]: I0127 07:54:55.591932 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:54:56 crc kubenswrapper[4787]: I0127 07:54:56.558251 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d78a135f-a7de-4bb7-b2aa-168fb353658f","Type":"ContainerStarted","Data":"86b21223e074e0eb982b5516ae2faf73f9fccba44d21367b5f9b9319a1d62acc"} Jan 27 07:54:56 crc kubenswrapper[4787]: I0127 07:54:56.607802 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.607770998 podStartE2EDuration="2.607770998s" podCreationTimestamp="2026-01-27 07:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:54:56.601059534 +0000 UTC m=+202.253415066" watchObservedRunningTime="2026-01-27 07:54:56.607770998 +0000 UTC m=+202.260126530" Jan 27 
07:54:56 crc kubenswrapper[4787]: I0127 07:54:56.845094 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:56 crc kubenswrapper[4787]: I0127 07:54:56.845175 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:56 crc kubenswrapper[4787]: I0127 07:54:56.889620 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:57 crc kubenswrapper[4787]: I0127 07:54:57.333258 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:57 crc kubenswrapper[4787]: I0127 07:54:57.382334 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:54:57 crc kubenswrapper[4787]: I0127 07:54:57.615175 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:54:57 crc kubenswrapper[4787]: I0127 07:54:57.969664 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-672cg"] Jan 27 07:54:58 crc kubenswrapper[4787]: I0127 07:54:58.286436 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:58 crc kubenswrapper[4787]: I0127 07:54:58.343999 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:54:58 crc kubenswrapper[4787]: I0127 07:54:58.570350 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-672cg" podUID="2a1ecf2a-6b91-48d7-873e-918dd4e045fc" containerName="registry-server" containerID="cri-o://be7ffa9db32f83f23699c5d5677bd9eb9428eb33ca38c00529df07ae62a31547" gracePeriod=2 Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.368257 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9l2"] Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.368582 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hz9l2" podUID="264daa3b-f5d9-407d-8835-35845525329a" containerName="registry-server" containerID="cri-o://92972139bf2205651890b39b93a8107ed20a0a61ec1230b6775bffc306789804" gracePeriod=2 Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.591634 4787 generic.go:334] "Generic (PLEG): container finished" podID="2a1ecf2a-6b91-48d7-873e-918dd4e045fc" containerID="be7ffa9db32f83f23699c5d5677bd9eb9428eb33ca38c00529df07ae62a31547" exitCode=0 Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.591691 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-672cg" event={"ID":"2a1ecf2a-6b91-48d7-873e-918dd4e045fc","Type":"ContainerDied","Data":"be7ffa9db32f83f23699c5d5677bd9eb9428eb33ca38c00529df07ae62a31547"} Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.594179 4787 generic.go:334] "Generic (PLEG): container finished" podID="0368ea79-94a8-42e3-8986-dddbec83d755" containerID="4c62d29717da4e2dafd29f1d1052ba2a291d903a0149abf52208f5ae61b47bc9" exitCode=0 Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.594262 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-cgwgp" event={"ID":"0368ea79-94a8-42e3-8986-dddbec83d755","Type":"ContainerDied","Data":"4c62d29717da4e2dafd29f1d1052ba2a291d903a0149abf52208f5ae61b47bc9"} Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.600908 4787 generic.go:334] "Generic (PLEG): container finished" podID="44cc5af2-0636-4589-a988-e7e32bfea075" containerID="0cc6e3133b3811c2501b6fd4d34b20a999454792da7dd131df5d09583da6565d" exitCode=0 Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.600952 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxzl" event={"ID":"44cc5af2-0636-4589-a988-e7e32bfea075","Type":"ContainerDied","Data":"0cc6e3133b3811c2501b6fd4d34b20a999454792da7dd131df5d09583da6565d"} Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.865346 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.990579 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-catalog-content\") pod \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\" (UID: \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\") " Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.990675 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-utilities\") pod \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\" (UID: \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\") " Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.990802 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqrrx\" (UniqueName: \"kubernetes.io/projected/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-kube-api-access-bqrrx\") pod \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\" (UID: \"2a1ecf2a-6b91-48d7-873e-918dd4e045fc\") " Jan 27 07:54:59 crc kubenswrapper[4787]: I0127 07:54:59.992948 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-utilities" (OuterVolumeSpecName: "utilities") pod "2a1ecf2a-6b91-48d7-873e-918dd4e045fc" (UID: "2a1ecf2a-6b91-48d7-873e-918dd4e045fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.001338 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-kube-api-access-bqrrx" (OuterVolumeSpecName: "kube-api-access-bqrrx") pod "2a1ecf2a-6b91-48d7-873e-918dd4e045fc" (UID: "2a1ecf2a-6b91-48d7-873e-918dd4e045fc"). InnerVolumeSpecName "kube-api-access-bqrrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.046295 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a1ecf2a-6b91-48d7-873e-918dd4e045fc" (UID: "2a1ecf2a-6b91-48d7-873e-918dd4e045fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.092326 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqrrx\" (UniqueName: \"kubernetes.io/projected/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-kube-api-access-bqrrx\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.092364 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.092374 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a1ecf2a-6b91-48d7-873e-918dd4e045fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.610315 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-672cg" event={"ID":"2a1ecf2a-6b91-48d7-873e-918dd4e045fc","Type":"ContainerDied","Data":"b8aaeac82942157bbd1bdc8b6aa3df78bb8abe8a0a411a20aacb7836a96b1339"} Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.610749 4787 scope.go:117] "RemoveContainer" containerID="be7ffa9db32f83f23699c5d5677bd9eb9428eb33ca38c00529df07ae62a31547" Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.610690 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-672cg" Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.616449 4787 generic.go:334] "Generic (PLEG): container finished" podID="264daa3b-f5d9-407d-8835-35845525329a" containerID="92972139bf2205651890b39b93a8107ed20a0a61ec1230b6775bffc306789804" exitCode=0 Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.616499 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9l2" event={"ID":"264daa3b-f5d9-407d-8835-35845525329a","Type":"ContainerDied","Data":"92972139bf2205651890b39b93a8107ed20a0a61ec1230b6775bffc306789804"} Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.648067 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-672cg"] Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.651412 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-672cg"] Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.658688 4787 scope.go:117] "RemoveContainer" containerID="e44397098e422e87e1da7b248b23d5f52f1cfcc38a65342a95ecf9164e6322e7" Jan 27 07:55:00 crc kubenswrapper[4787]: I0127 07:55:00.681123 4787 scope.go:117] "RemoveContainer" containerID="01d251eebd7c2252e3ae9b7ab01cffa2fc304155551715934787979ebd43ab3e" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.086840 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a1ecf2a-6b91-48d7-873e-918dd4e045fc" path="/var/lib/kubelet/pods/2a1ecf2a-6b91-48d7-873e-918dd4e045fc/volumes" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.512521 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.624356 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9l2" event={"ID":"264daa3b-f5d9-407d-8835-35845525329a","Type":"ContainerDied","Data":"34b029af2088374d8be804c9373b55e19265a67e5ebece55f9fd4a6ba2ab1d08"} Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.624874 4787 scope.go:117] "RemoveContainer" containerID="92972139bf2205651890b39b93a8107ed20a0a61ec1230b6775bffc306789804" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.624452 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz9l2" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.635080 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/264daa3b-f5d9-407d-8835-35845525329a-catalog-content\") pod \"264daa3b-f5d9-407d-8835-35845525329a\" (UID: \"264daa3b-f5d9-407d-8835-35845525329a\") " Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.635126 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/264daa3b-f5d9-407d-8835-35845525329a-utilities\") pod \"264daa3b-f5d9-407d-8835-35845525329a\" (UID: \"264daa3b-f5d9-407d-8835-35845525329a\") " Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.635159 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65qxh\" (UniqueName: \"kubernetes.io/projected/264daa3b-f5d9-407d-8835-35845525329a-kube-api-access-65qxh\") pod \"264daa3b-f5d9-407d-8835-35845525329a\" (UID: \"264daa3b-f5d9-407d-8835-35845525329a\") " Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.636640 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/264daa3b-f5d9-407d-8835-35845525329a-utilities" (OuterVolumeSpecName: "utilities") pod "264daa3b-f5d9-407d-8835-35845525329a" (UID: "264daa3b-f5d9-407d-8835-35845525329a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.643722 4787 scope.go:117] "RemoveContainer" containerID="1b10219e013f243bd9fe375bd99faab099e08f41d538f33974e456fa1220d7b7" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.644181 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264daa3b-f5d9-407d-8835-35845525329a-kube-api-access-65qxh" (OuterVolumeSpecName: "kube-api-access-65qxh") pod "264daa3b-f5d9-407d-8835-35845525329a" (UID: "264daa3b-f5d9-407d-8835-35845525329a"). InnerVolumeSpecName "kube-api-access-65qxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.663202 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/264daa3b-f5d9-407d-8835-35845525329a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "264daa3b-f5d9-407d-8835-35845525329a" (UID: "264daa3b-f5d9-407d-8835-35845525329a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.670092 4787 scope.go:117] "RemoveContainer" containerID="fc80e896d018112629afc7413a0e19d17cf029077e3a9fbb67d67c7d973bf040" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.736042 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/264daa3b-f5d9-407d-8835-35845525329a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.736098 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/264daa3b-f5d9-407d-8835-35845525329a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.736115 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65qxh\" (UniqueName: \"kubernetes.io/projected/264daa3b-f5d9-407d-8835-35845525329a-kube-api-access-65qxh\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.775222 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qjd6t"] Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.775731 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qjd6t" podUID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" containerName="registry-server" containerID="cri-o://273c3736c041c8021a3e5eb647bdda5506773181a199572370f5be6b85004a04" gracePeriod=2 Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.961682 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9l2"] Jan 27 07:55:01 crc kubenswrapper[4787]: I0127 07:55:01.964998 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9l2"] Jan 27 07:55:02 crc kubenswrapper[4787]: I0127 07:55:02.413146 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7598866b8d-sz455"] Jan 27 07:55:02 crc kubenswrapper[4787]: I0127 07:55:02.413471 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" podUID="1a623b08-d78c-4d91-bff0-6535d713eb2c" containerName="controller-manager" containerID="cri-o://ec5d5918617f4cf77b4f25c6da400fa3ea9bb3863c461eac264ef2c1e57344ea" gracePeriod=30 Jan 27 07:55:02 crc kubenswrapper[4787]: I0127 07:55:02.437978 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl"] Jan 27 07:55:02 crc kubenswrapper[4787]: I0127 07:55:02.438647 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" podUID="6381e89b-99e4-480e-9ab0-14a99d22b889" containerName="route-controller-manager" containerID="cri-o://ab12b522093957a0418fad02bce7b15c7c854402c69a9e25ea60cea48243dfae" gracePeriod=30 Jan 27 07:55:02 crc kubenswrapper[4787]: I0127 07:55:02.671518 4787 generic.go:334] "Generic (PLEG): container finished" podID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" containerID="273c3736c041c8021a3e5eb647bdda5506773181a199572370f5be6b85004a04" exitCode=0 Jan 27 07:55:02 crc kubenswrapper[4787]: I0127 07:55:02.672171 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjd6t" 
event={"ID":"5f5e46e1-1252-4a0a-ad11-8d4236f3759b","Type":"ContainerDied","Data":"273c3736c041c8021a3e5eb647bdda5506773181a199572370f5be6b85004a04"} Jan 27 07:55:02 crc kubenswrapper[4787]: I0127 07:55:02.675909 4787 generic.go:334] "Generic (PLEG): container finished" podID="1a623b08-d78c-4d91-bff0-6535d713eb2c" containerID="ec5d5918617f4cf77b4f25c6da400fa3ea9bb3863c461eac264ef2c1e57344ea" exitCode=0 Jan 27 07:55:02 crc kubenswrapper[4787]: I0127 07:55:02.676020 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" event={"ID":"1a623b08-d78c-4d91-bff0-6535d713eb2c","Type":"ContainerDied","Data":"ec5d5918617f4cf77b4f25c6da400fa3ea9bb3863c461eac264ef2c1e57344ea"} Jan 27 07:55:02 crc kubenswrapper[4787]: I0127 07:55:02.680544 4787 generic.go:334] "Generic (PLEG): container finished" podID="6381e89b-99e4-480e-9ab0-14a99d22b889" containerID="ab12b522093957a0418fad02bce7b15c7c854402c69a9e25ea60cea48243dfae" exitCode=0 Jan 27 07:55:02 crc kubenswrapper[4787]: I0127 07:55:02.680612 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" event={"ID":"6381e89b-99e4-480e-9ab0-14a99d22b889","Type":"ContainerDied","Data":"ab12b522093957a0418fad02bce7b15c7c854402c69a9e25ea60cea48243dfae"} Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.083516 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="264daa3b-f5d9-407d-8835-35845525329a" path="/var/lib/kubelet/pods/264daa3b-f5d9-407d-8835-35845525329a/volumes" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.123016 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.129345 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.133152 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.162734 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qplk2\" (UniqueName: \"kubernetes.io/projected/6381e89b-99e4-480e-9ab0-14a99d22b889-kube-api-access-qplk2\") pod \"6381e89b-99e4-480e-9ab0-14a99d22b889\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.162812 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-config\") pod \"1a623b08-d78c-4d91-bff0-6535d713eb2c\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.162879 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6381e89b-99e4-480e-9ab0-14a99d22b889-client-ca\") pod \"6381e89b-99e4-480e-9ab0-14a99d22b889\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.162991 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6381e89b-99e4-480e-9ab0-14a99d22b889-config\") pod \"6381e89b-99e4-480e-9ab0-14a99d22b889\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.163034 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-catalog-content\") pod \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\" (UID: \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\") " Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.163058 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-proxy-ca-bundles\") pod \"1a623b08-d78c-4d91-bff0-6535d713eb2c\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.163652 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6381e89b-99e4-480e-9ab0-14a99d22b889-client-ca" (OuterVolumeSpecName: "client-ca") pod "6381e89b-99e4-480e-9ab0-14a99d22b889" (UID: "6381e89b-99e4-480e-9ab0-14a99d22b889"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.163874 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6381e89b-99e4-480e-9ab0-14a99d22b889-config" (OuterVolumeSpecName: "config") pod "6381e89b-99e4-480e-9ab0-14a99d22b889" (UID: "6381e89b-99e4-480e-9ab0-14a99d22b889"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.164333 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1a623b08-d78c-4d91-bff0-6535d713eb2c" (UID: "1a623b08-d78c-4d91-bff0-6535d713eb2c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.164407 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-config" (OuterVolumeSpecName: "config") pod "1a623b08-d78c-4d91-bff0-6535d713eb2c" (UID: "1a623b08-d78c-4d91-bff0-6535d713eb2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.164796 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dfzs\" (UniqueName: \"kubernetes.io/projected/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-kube-api-access-8dfzs\") pod \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\" (UID: \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\") " Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.164836 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-client-ca\") pod \"1a623b08-d78c-4d91-bff0-6535d713eb2c\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.164897 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6381e89b-99e4-480e-9ab0-14a99d22b889-serving-cert\") pod \"6381e89b-99e4-480e-9ab0-14a99d22b889\" (UID: \"6381e89b-99e4-480e-9ab0-14a99d22b889\") " Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.164965 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-utilities\") pod \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\" (UID: \"5f5e46e1-1252-4a0a-ad11-8d4236f3759b\") " Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.164990 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6dnj\" (UniqueName: \"kubernetes.io/projected/1a623b08-d78c-4d91-bff0-6535d713eb2c-kube-api-access-z6dnj\") pod \"1a623b08-d78c-4d91-bff0-6535d713eb2c\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.165049 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a623b08-d78c-4d91-bff0-6535d713eb2c-serving-cert\") pod \"1a623b08-d78c-4d91-bff0-6535d713eb2c\" (UID: \"1a623b08-d78c-4d91-bff0-6535d713eb2c\") " Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.165407 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6381e89b-99e4-480e-9ab0-14a99d22b889-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.165444 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6381e89b-99e4-480e-9ab0-14a99d22b889-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.167027 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-utilities" (OuterVolumeSpecName: "utilities") pod "5f5e46e1-1252-4a0a-ad11-8d4236f3759b" (UID: "5f5e46e1-1252-4a0a-ad11-8d4236f3759b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.167495 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a623b08-d78c-4d91-bff0-6535d713eb2c" (UID: "1a623b08-d78c-4d91-bff0-6535d713eb2c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.198506 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a623b08-d78c-4d91-bff0-6535d713eb2c-kube-api-access-z6dnj" (OuterVolumeSpecName: "kube-api-access-z6dnj") pod "1a623b08-d78c-4d91-bff0-6535d713eb2c" (UID: "1a623b08-d78c-4d91-bff0-6535d713eb2c"). InnerVolumeSpecName "kube-api-access-z6dnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.198529 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a623b08-d78c-4d91-bff0-6535d713eb2c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a623b08-d78c-4d91-bff0-6535d713eb2c" (UID: "1a623b08-d78c-4d91-bff0-6535d713eb2c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.198604 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6381e89b-99e4-480e-9ab0-14a99d22b889-kube-api-access-qplk2" (OuterVolumeSpecName: "kube-api-access-qplk2") pod "6381e89b-99e4-480e-9ab0-14a99d22b889" (UID: "6381e89b-99e4-480e-9ab0-14a99d22b889"). InnerVolumeSpecName "kube-api-access-qplk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.198610 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-kube-api-access-8dfzs" (OuterVolumeSpecName: "kube-api-access-8dfzs") pod "5f5e46e1-1252-4a0a-ad11-8d4236f3759b" (UID: "5f5e46e1-1252-4a0a-ad11-8d4236f3759b"). InnerVolumeSpecName "kube-api-access-8dfzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.198515 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6381e89b-99e4-480e-9ab0-14a99d22b889-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6381e89b-99e4-480e-9ab0-14a99d22b889" (UID: "6381e89b-99e4-480e-9ab0-14a99d22b889"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.266919 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6dnj\" (UniqueName: \"kubernetes.io/projected/1a623b08-d78c-4d91-bff0-6535d713eb2c-kube-api-access-z6dnj\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.266967 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a623b08-d78c-4d91-bff0-6535d713eb2c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.266980 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qplk2\" (UniqueName: \"kubernetes.io/projected/6381e89b-99e4-480e-9ab0-14a99d22b889-kube-api-access-qplk2\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.266992 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.267001 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.267010 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dfzs\" (UniqueName: \"kubernetes.io/projected/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-kube-api-access-8dfzs\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.267024 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a623b08-d78c-4d91-bff0-6535d713eb2c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.267038 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6381e89b-99e4-480e-9ab0-14a99d22b889-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.267050 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.322763 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f5e46e1-1252-4a0a-ad11-8d4236f3759b" (UID: "5f5e46e1-1252-4a0a-ad11-8d4236f3759b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.368351 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5e46e1-1252-4a0a-ad11-8d4236f3759b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.697972 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgwgp" event={"ID":"0368ea79-94a8-42e3-8986-dddbec83d755","Type":"ContainerStarted","Data":"40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2"} Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.700210 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.700390 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl" event={"ID":"6381e89b-99e4-480e-9ab0-14a99d22b889","Type":"ContainerDied","Data":"d01da66c31038ba9bcead586f78d183db79fbd103a10f03106ce884e28edd5a4"} Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.700794 4787 scope.go:117] "RemoveContainer" containerID="ab12b522093957a0418fad02bce7b15c7c854402c69a9e25ea60cea48243dfae" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.706357 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxzl" event={"ID":"44cc5af2-0636-4589-a988-e7e32bfea075","Type":"ContainerStarted","Data":"e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223"} Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.718089 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjd6t" event={"ID":"5f5e46e1-1252-4a0a-ad11-8d4236f3759b","Type":"ContainerDied","Data":"a7fc73f366dad2ee2177e45cec1c07c9411e1ae51b5099acd00178407f132bd0"} Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.718162 4787 scope.go:117] "RemoveContainer" containerID="273c3736c041c8021a3e5eb647bdda5506773181a199572370f5be6b85004a04" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.718283 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qjd6t" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.726643 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cgwgp" podStartSLOduration=4.059454039 podStartE2EDuration="59.726612456s" podCreationTimestamp="2026-01-27 07:54:04 +0000 UTC" firstStartedPulling="2026-01-27 07:54:06.833457422 +0000 UTC m=+152.485812924" lastFinishedPulling="2026-01-27 07:55:02.500615849 +0000 UTC m=+208.152971341" observedRunningTime="2026-01-27 07:55:03.718754834 +0000 UTC m=+209.371110346" watchObservedRunningTime="2026-01-27 07:55:03.726612456 +0000 UTC m=+209.378967948" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.728536 4787 generic.go:334] "Generic (PLEG): container finished" podID="1e42281e-20ed-4105-866f-878ffbf6c6eb" containerID="aba819bb87de56fd8e57aa84a316cf4c3386292e3c4ce220f195f806f8261dfd" exitCode=0 Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.728682 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkjnl" event={"ID":"1e42281e-20ed-4105-866f-878ffbf6c6eb","Type":"ContainerDied","Data":"aba819bb87de56fd8e57aa84a316cf4c3386292e3c4ce220f195f806f8261dfd"} Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.752160 4787 scope.go:117] "RemoveContainer" containerID="1309a0eb542609f82174c456dbe308ca94a8f5093a7b41ff79e5822733bbd396" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.756319 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" event={"ID":"1a623b08-d78c-4d91-bff0-6535d713eb2c","Type":"ContainerDied","Data":"3da2e262993ac94cc9dc72bad86247ba0291e69ae87e9695f743aab86626a5a0"} Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.756441 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7598866b8d-sz455" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.772249 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tnxzl" podStartSLOduration=3.994312203 podStartE2EDuration="59.772210335s" podCreationTimestamp="2026-01-27 07:54:04 +0000 UTC" firstStartedPulling="2026-01-27 07:54:06.839814408 +0000 UTC m=+152.492169900" lastFinishedPulling="2026-01-27 07:55:02.61771253 +0000 UTC m=+208.270068032" observedRunningTime="2026-01-27 07:55:03.752859186 +0000 UTC m=+209.405214698" watchObservedRunningTime="2026-01-27 07:55:03.772210335 +0000 UTC m=+209.424565837" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.790981 4787 scope.go:117] "RemoveContainer" containerID="e485664fbc4a751f0a85db31317555b712de9d281e0fb607d7c774fccc36329f" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.813276 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl"] Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.815794 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774cb7c4cb-j9mkl"] Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.825804 4787 scope.go:117] "RemoveContainer" containerID="ec5d5918617f4cf77b4f25c6da400fa3ea9bb3863c461eac264ef2c1e57344ea" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.837366 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr"] Jan 27 07:55:03 crc kubenswrapper[4787]: E0127 07:55:03.837681 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264daa3b-f5d9-407d-8835-35845525329a" containerName="extract-content" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.837844 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="264daa3b-f5d9-407d-8835-35845525329a" containerName="extract-content" Jan 27 07:55:03 crc kubenswrapper[4787]: E0127 07:55:03.837892 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" containerName="extract-content" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.837903 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" containerName="extract-content" Jan 27 07:55:03 crc kubenswrapper[4787]: E0127 07:55:03.837919 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1ecf2a-6b91-48d7-873e-918dd4e045fc" containerName="extract-utilities" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.837929 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1ecf2a-6b91-48d7-873e-918dd4e045fc" containerName="extract-utilities" Jan 27 07:55:03 crc kubenswrapper[4787]: E0127 07:55:03.837939 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1ecf2a-6b91-48d7-873e-918dd4e045fc" containerName="extract-content" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.837947 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1ecf2a-6b91-48d7-873e-918dd4e045fc" containerName="extract-content" Jan 27 07:55:03 crc kubenswrapper[4787]: E0127 07:55:03.837956 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6381e89b-99e4-480e-9ab0-14a99d22b889" containerName="route-controller-manager" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 
07:55:03.837965 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6381e89b-99e4-480e-9ab0-14a99d22b889" containerName="route-controller-manager" Jan 27 07:55:03 crc kubenswrapper[4787]: E0127 07:55:03.837979 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264daa3b-f5d9-407d-8835-35845525329a" containerName="registry-server" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.837989 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="264daa3b-f5d9-407d-8835-35845525329a" containerName="registry-server" Jan 27 07:55:03 crc kubenswrapper[4787]: E0127 07:55:03.838003 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" containerName="registry-server" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.838011 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" containerName="registry-server" Jan 27 07:55:03 crc kubenswrapper[4787]: E0127 07:55:03.838021 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" containerName="extract-utilities" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.838030 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" containerName="extract-utilities" Jan 27 07:55:03 crc kubenswrapper[4787]: E0127 07:55:03.838040 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1ecf2a-6b91-48d7-873e-918dd4e045fc" containerName="registry-server" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.838174 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1ecf2a-6b91-48d7-873e-918dd4e045fc" containerName="registry-server" Jan 27 07:55:03 crc kubenswrapper[4787]: E0127 07:55:03.838233 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264daa3b-f5d9-407d-8835-35845525329a" containerName="extract-utilities" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.838244 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="264daa3b-f5d9-407d-8835-35845525329a" containerName="extract-utilities" Jan 27 07:55:03 crc kubenswrapper[4787]: E0127 07:55:03.838254 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a623b08-d78c-4d91-bff0-6535d713eb2c" containerName="controller-manager" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.838262 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a623b08-d78c-4d91-bff0-6535d713eb2c" containerName="controller-manager" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.838387 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a623b08-d78c-4d91-bff0-6535d713eb2c" containerName="controller-manager" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.838405 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6381e89b-99e4-480e-9ab0-14a99d22b889" containerName="route-controller-manager" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.838414 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" containerName="registry-server" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.838422 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1ecf2a-6b91-48d7-873e-918dd4e045fc" containerName="registry-server" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.838430 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="264daa3b-f5d9-407d-8835-35845525329a" 
containerName="registry-server" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.838897 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.844086 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.844335 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.844462 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.847060 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-777857c966-2h66n"] Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.848173 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.848392 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.848521 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.852156 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.853504 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7598866b8d-sz455"] Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.854089 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.855806 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.856291 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7598866b8d-sz455"] Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.858902 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.859140 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr"] Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.862094 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.864417 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-777857c966-2h66n"] Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.864831 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 
07:55:03.865311 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.866603 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.870490 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qjd6t"] Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.876972 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-proxy-ca-bundles\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.877023 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7bl5\" (UniqueName: \"kubernetes.io/projected/816883bf-afa6-47b8-b823-7603ebf5cc71-kube-api-access-s7bl5\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.877049 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/816883bf-afa6-47b8-b823-7603ebf5cc71-serving-cert\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.877073 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17fe0585-ad09-4585-9b4c-01698e2c295b-client-ca\") pod \"route-controller-manager-659f59795d-4hwcr\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.877110 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-client-ca\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.877729 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-config\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.877758 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17fe0585-ad09-4585-9b4c-01698e2c295b-serving-cert\") pod \"route-controller-manager-659f59795d-4hwcr\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " 
pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.878035 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qjd6t"] Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.879670 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b5xw\" (UniqueName: \"kubernetes.io/projected/17fe0585-ad09-4585-9b4c-01698e2c295b-kube-api-access-9b5xw\") pod \"route-controller-manager-659f59795d-4hwcr\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.879753 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fe0585-ad09-4585-9b4c-01698e2c295b-config\") pod \"route-controller-manager-659f59795d-4hwcr\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.981185 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-client-ca\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.981250 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-config\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.981281 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17fe0585-ad09-4585-9b4c-01698e2c295b-serving-cert\") pod \"route-controller-manager-659f59795d-4hwcr\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.981314 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b5xw\" (UniqueName: \"kubernetes.io/projected/17fe0585-ad09-4585-9b4c-01698e2c295b-kube-api-access-9b5xw\") pod \"route-controller-manager-659f59795d-4hwcr\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.981340 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fe0585-ad09-4585-9b4c-01698e2c295b-config\") pod \"route-controller-manager-659f59795d-4hwcr\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.981375 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-proxy-ca-bundles\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.981406 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7bl5\" (UniqueName: \"kubernetes.io/projected/816883bf-afa6-47b8-b823-7603ebf5cc71-kube-api-access-s7bl5\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.981426 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/816883bf-afa6-47b8-b823-7603ebf5cc71-serving-cert\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.981451 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17fe0585-ad09-4585-9b4c-01698e2c295b-client-ca\") pod \"route-controller-manager-659f59795d-4hwcr\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.982508 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-client-ca\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.983415 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-proxy-ca-bundles\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.983966 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17fe0585-ad09-4585-9b4c-01698e2c295b-client-ca\") pod \"route-controller-manager-659f59795d-4hwcr\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.983959 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-config\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.984503 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fe0585-ad09-4585-9b4c-01698e2c295b-config\") pod \"route-controller-manager-659f59795d-4hwcr\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " 
pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.989366 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/816883bf-afa6-47b8-b823-7603ebf5cc71-serving-cert\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:03 crc kubenswrapper[4787]: I0127 07:55:03.989435 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17fe0585-ad09-4585-9b4c-01698e2c295b-serving-cert\") pod \"route-controller-manager-659f59795d-4hwcr\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.000230 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b5xw\" (UniqueName: \"kubernetes.io/projected/17fe0585-ad09-4585-9b4c-01698e2c295b-kube-api-access-9b5xw\") pod \"route-controller-manager-659f59795d-4hwcr\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.001955 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7bl5\" (UniqueName: \"kubernetes.io/projected/816883bf-afa6-47b8-b823-7603ebf5cc71-kube-api-access-s7bl5\") pod \"controller-manager-777857c966-2h66n\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.170393 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.178170 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.516266 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-777857c966-2h66n"] Jan 27 07:55:04 crc kubenswrapper[4787]: W0127 07:55:04.524876 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod816883bf_afa6_47b8_b823_7603ebf5cc71.slice/crio-c09f056e6b522f051bebfcf76d79e3302d7058b741907e39732f827b109cc806 WatchSource:0}: Error finding container c09f056e6b522f051bebfcf76d79e3302d7058b741907e39732f827b109cc806: Status 404 returned error can't find the container with id c09f056e6b522f051bebfcf76d79e3302d7058b741907e39732f827b109cc806 Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.659879 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr"] Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.784327 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" event={"ID":"17fe0585-ad09-4585-9b4c-01698e2c295b","Type":"ContainerStarted","Data":"5d2b198f58bbf5e2fc1127f374689f197db99d090e6ffaf6f08fba9b1669dc76"} Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.787816 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" event={"ID":"816883bf-afa6-47b8-b823-7603ebf5cc71","Type":"ContainerStarted","Data":"5f1d64b82d7b846232716736a494d6727d50dc1adacd62ef1ab1de361e5c3498"} Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.787861 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" event={"ID":"816883bf-afa6-47b8-b823-7603ebf5cc71","Type":"ContainerStarted","Data":"c09f056e6b522f051bebfcf76d79e3302d7058b741907e39732f827b109cc806"} Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.788388 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.789701 4787 patch_prober.go:28] interesting pod/controller-manager-777857c966-2h66n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.789747 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" podUID="816883bf-afa6-47b8-b823-7603ebf5cc71" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.822341 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" podStartSLOduration=2.822309154 podStartE2EDuration="2.822309154s" podCreationTimestamp="2026-01-27 07:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:55:04.819161981 +0000 UTC m=+210.471517483" 
watchObservedRunningTime="2026-01-27 07:55:04.822309154 +0000 UTC m=+210.474664646" Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.920162 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:55:04 crc kubenswrapper[4787]: I0127 07:55:04.921333 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.085330 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a623b08-d78c-4d91-bff0-6535d713eb2c" path="/var/lib/kubelet/pods/1a623b08-d78c-4d91-bff0-6535d713eb2c/volumes" Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.086509 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5e46e1-1252-4a0a-ad11-8d4236f3759b" path="/var/lib/kubelet/pods/5f5e46e1-1252-4a0a-ad11-8d4236f3759b/volumes" Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.087447 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6381e89b-99e4-480e-9ab0-14a99d22b889" path="/var/lib/kubelet/pods/6381e89b-99e4-480e-9ab0-14a99d22b889/volumes" Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.549425 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.550064 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.608456 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.797266 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkjnl" event={"ID":"1e42281e-20ed-4105-866f-878ffbf6c6eb","Type":"ContainerStarted","Data":"8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84"} Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.799411 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" event={"ID":"17fe0585-ad09-4585-9b4c-01698e2c295b","Type":"ContainerStarted","Data":"dd74d0a47c882efd924ba6fb867cda61c83e9bb61f03bbd8c3f4c56cbfb22222"} Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.800749 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.805362 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.811881 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.816434 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kkjnl" podStartSLOduration=3.043845194 podStartE2EDuration="58.816407415s" podCreationTimestamp="2026-01-27 07:54:07 +0000 UTC" firstStartedPulling="2026-01-27 07:54:08.935723462 +0000 UTC m=+154.588078954" lastFinishedPulling="2026-01-27 07:55:04.708285683 +0000 UTC 
m=+210.360641175" observedRunningTime="2026-01-27 07:55:05.81463046 +0000 UTC m=+211.466985972" watchObservedRunningTime="2026-01-27 07:55:05.816407415 +0000 UTC m=+211.468762917" Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.835897 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" podStartSLOduration=3.835865108 podStartE2EDuration="3.835865108s" podCreationTimestamp="2026-01-27 07:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:55:05.833378913 +0000 UTC m=+211.485734405" watchObservedRunningTime="2026-01-27 07:55:05.835865108 +0000 UTC m=+211.488220610" Jan 27 07:55:05 crc kubenswrapper[4787]: I0127 07:55:05.967838 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tnxzl" podUID="44cc5af2-0636-4589-a988-e7e32bfea075" containerName="registry-server" probeResult="failure" output=< Jan 27 07:55:05 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 27 07:55:05 crc kubenswrapper[4787]: > Jan 27 07:55:07 crc kubenswrapper[4787]: I0127 07:55:07.840732 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:55:07 crc kubenswrapper[4787]: I0127 07:55:07.841171 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:55:08 crc kubenswrapper[4787]: I0127 07:55:08.884180 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kkjnl" podUID="1e42281e-20ed-4105-866f-878ffbf6c6eb" containerName="registry-server" probeResult="failure" output=< Jan 27 07:55:08 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 27 07:55:08 crc kubenswrapper[4787]: > Jan 27 07:55:12 crc kubenswrapper[4787]: I0127 07:55:12.833340 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" podUID="fcc64739-2fd2-4413-a19e-0bc14dd883d6" containerName="oauth-openshift" containerID="cri-o://2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218" gracePeriod=15 Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.277448 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.321408 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-796668b4d5-vqtrj"] Jan 27 07:55:13 crc kubenswrapper[4787]: E0127 07:55:13.325704 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc64739-2fd2-4413-a19e-0bc14dd883d6" containerName="oauth-openshift" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.325750 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc64739-2fd2-4413-a19e-0bc14dd883d6" containerName="oauth-openshift" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.325954 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc64739-2fd2-4413-a19e-0bc14dd883d6" containerName="oauth-openshift" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.326504 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.335904 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-796668b4d5-vqtrj"] Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.368121 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-idp-0-file-data\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.368476 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-audit-policies\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.368593 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-provider-selection\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.368742 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-ocp-branding-template\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.368857 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-router-certs\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.368941 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-session\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.369080 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcc64739-2fd2-4413-a19e-0bc14dd883d6-audit-dir\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.369179 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-trusted-ca-bundle\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.369355 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-serving-cert\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.369476 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c54z8\" (UniqueName: \"kubernetes.io/projected/fcc64739-2fd2-4413-a19e-0bc14dd883d6-kube-api-access-c54z8\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.369570 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-service-ca\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.369663 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-cliconfig\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.369749 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-login\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.369927 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-error\") pod \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\" (UID: \"fcc64739-2fd2-4413-a19e-0bc14dd883d6\") " Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.369670 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.369694 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcc64739-2fd2-4413-a19e-0bc14dd883d6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.370176 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.370410 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.370447 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.376326 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.376590 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.377791 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.378047 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.378210 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.378504 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.378850 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.378913 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.382311 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc64739-2fd2-4413-a19e-0bc14dd883d6-kube-api-access-c54z8" (OuterVolumeSpecName: "kube-api-access-c54z8") pod "fcc64739-2fd2-4413-a19e-0bc14dd883d6" (UID: "fcc64739-2fd2-4413-a19e-0bc14dd883d6"). InnerVolumeSpecName "kube-api-access-c54z8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.471601 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-user-template-login\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.471922 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-session\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.472049 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.472139 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdkw4\" (UniqueName: \"kubernetes.io/projected/0bd74928-5db8-49c1-ba2f-c8637ea86d46-kube-api-access-mdkw4\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.472225 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.472301 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0bd74928-5db8-49c1-ba2f-c8637ea86d46-audit-dir\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.472400 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.472499 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-router-certs\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.472624 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0bd74928-5db8-49c1-ba2f-c8637ea86d46-audit-policies\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.472727 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-service-ca\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.472866 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-user-template-error\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.473007 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-serving-cert\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.473105 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-cliconfig\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.473229 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.473373 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.473465 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-session\") on node 
\"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.473577 4787 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcc64739-2fd2-4413-a19e-0bc14dd883d6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.473675 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.473744 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.473809 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c54z8\" (UniqueName: \"kubernetes.io/projected/fcc64739-2fd2-4413-a19e-0bc14dd883d6-kube-api-access-c54z8\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.473867 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.473930 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.473994 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.474137 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.474273 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.474368 4787 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fcc64739-2fd2-4413-a19e-0bc14dd883d6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.474454 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.474524 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fcc64739-2fd2-4413-a19e-0bc14dd883d6-v4-0-config-system-ocp-branding-template\") on node 
\"crc\" DevicePath \"\"" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.576227 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.576318 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdkw4\" (UniqueName: \"kubernetes.io/projected/0bd74928-5db8-49c1-ba2f-c8637ea86d46-kube-api-access-mdkw4\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.576384 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.576433 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.576500 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0bd74928-5db8-49c1-ba2f-c8637ea86d46-audit-dir\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.576591 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-router-certs\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.576676 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0bd74928-5db8-49c1-ba2f-c8637ea86d46-audit-policies\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.576726 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-service-ca\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 
crc kubenswrapper[4787]: I0127 07:55:13.576795 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-user-template-error\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.576868 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-cliconfig\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.577791 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0bd74928-5db8-49c1-ba2f-c8637ea86d46-audit-policies\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.577828 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.577923 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0bd74928-5db8-49c1-ba2f-c8637ea86d46-audit-dir\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.579609 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-serving-cert\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.579671 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.579733 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-user-template-login\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.579759 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-session\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.580674 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.581189 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-service-ca\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.581499 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-cliconfig\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.585063 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-router-certs\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.585099 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-user-template-error\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.585216 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-user-template-login\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.585749 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.586310 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-session\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.587076 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-system-serving-cert\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.588054 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0bd74928-5db8-49c1-ba2f-c8637ea86d46-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.593879 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdkw4\" (UniqueName: \"kubernetes.io/projected/0bd74928-5db8-49c1-ba2f-c8637ea86d46-kube-api-access-mdkw4\") pod \"oauth-openshift-796668b4d5-vqtrj\" (UID: \"0bd74928-5db8-49c1-ba2f-c8637ea86d46\") " pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.662596 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.852808 4787 generic.go:334] "Generic (PLEG): container finished" podID="fcc64739-2fd2-4413-a19e-0bc14dd883d6" containerID="2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218" exitCode=0 Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.852913 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" event={"ID":"fcc64739-2fd2-4413-a19e-0bc14dd883d6","Type":"ContainerDied","Data":"2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218"} Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.852952 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" event={"ID":"fcc64739-2fd2-4413-a19e-0bc14dd883d6","Type":"ContainerDied","Data":"0586f60c3abf6a05bb8ec725bce60998408f815b2354ed56f05b7be6dbc58bdc"} Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.852971 4787 scope.go:117] "RemoveContainer" containerID="2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.853111 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wdppr" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.887639 4787 scope.go:117] "RemoveContainer" containerID="2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218" Jan 27 07:55:13 crc kubenswrapper[4787]: E0127 07:55:13.888901 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218\": container with ID starting with 2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218 not found: ID does not exist" containerID="2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.888983 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218"} err="failed to get container status \"2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218\": rpc error: code = NotFound desc = could not find container \"2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218\": container with ID starting with 2d766c390986fb12647d9296b2ff8a7a88b1618b438430370daaded86b506218 not found: ID does not exist" Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.898183 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wdppr"] Jan 27 07:55:13 crc kubenswrapper[4787]: I0127 07:55:13.905842 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wdppr"] Jan 27 07:55:14 crc kubenswrapper[4787]: I0127 07:55:14.134337 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-796668b4d5-vqtrj"] Jan 27 07:55:14 crc kubenswrapper[4787]: I0127 07:55:14.863613 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" event={"ID":"0bd74928-5db8-49c1-ba2f-c8637ea86d46","Type":"ContainerStarted","Data":"19459ce0cd0eaf9ca8584f7b28e2280652ac5238fb1a67badc08fb044a4438e3"} Jan 27 07:55:14 crc kubenswrapper[4787]: I0127 07:55:14.864370 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:14 crc kubenswrapper[4787]: I0127 07:55:14.864589 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" event={"ID":"0bd74928-5db8-49c1-ba2f-c8637ea86d46","Type":"ContainerStarted","Data":"cc0d409f59f07f3cb4500b4109d68716eccc076c958596fc2e5b3bc1722c5080"} Jan 27 07:55:14 crc kubenswrapper[4787]: I0127 07:55:14.895828 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" podStartSLOduration=27.895793227 podStartE2EDuration="27.895793227s" podCreationTimestamp="2026-01-27 07:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:55:14.889270131 +0000 UTC m=+220.541625713" watchObservedRunningTime="2026-01-27 07:55:14.895793227 +0000 UTC m=+220.548148759" Jan 27 07:55:14 crc kubenswrapper[4787]: I0127 07:55:14.975428 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:55:15 crc 
kubenswrapper[4787]: I0127 07:55:15.048661 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:55:15 crc kubenswrapper[4787]: I0127 07:55:15.060962 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-796668b4d5-vqtrj" Jan 27 07:55:15 crc kubenswrapper[4787]: I0127 07:55:15.083565 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc64739-2fd2-4413-a19e-0bc14dd883d6" path="/var/lib/kubelet/pods/fcc64739-2fd2-4413-a19e-0bc14dd883d6/volumes" Jan 27 07:55:15 crc kubenswrapper[4787]: I0127 07:55:15.600792 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.225275 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgwgp"] Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.225505 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cgwgp" podUID="0368ea79-94a8-42e3-8986-dddbec83d755" containerName="registry-server" containerID="cri-o://40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2" gracePeriod=2 Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.756574 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.832633 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5r7l\" (UniqueName: \"kubernetes.io/projected/0368ea79-94a8-42e3-8986-dddbec83d755-kube-api-access-z5r7l\") pod \"0368ea79-94a8-42e3-8986-dddbec83d755\" (UID: \"0368ea79-94a8-42e3-8986-dddbec83d755\") " Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.832883 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0368ea79-94a8-42e3-8986-dddbec83d755-catalog-content\") pod \"0368ea79-94a8-42e3-8986-dddbec83d755\" (UID: \"0368ea79-94a8-42e3-8986-dddbec83d755\") " Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.832916 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0368ea79-94a8-42e3-8986-dddbec83d755-utilities\") pod \"0368ea79-94a8-42e3-8986-dddbec83d755\" (UID: \"0368ea79-94a8-42e3-8986-dddbec83d755\") " Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.834057 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0368ea79-94a8-42e3-8986-dddbec83d755-utilities" (OuterVolumeSpecName: "utilities") pod "0368ea79-94a8-42e3-8986-dddbec83d755" (UID: "0368ea79-94a8-42e3-8986-dddbec83d755"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.841815 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0368ea79-94a8-42e3-8986-dddbec83d755-kube-api-access-z5r7l" (OuterVolumeSpecName: "kube-api-access-z5r7l") pod "0368ea79-94a8-42e3-8986-dddbec83d755" (UID: "0368ea79-94a8-42e3-8986-dddbec83d755"). InnerVolumeSpecName "kube-api-access-z5r7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.891748 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0368ea79-94a8-42e3-8986-dddbec83d755-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0368ea79-94a8-42e3-8986-dddbec83d755" (UID: "0368ea79-94a8-42e3-8986-dddbec83d755"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.895802 4787 generic.go:334] "Generic (PLEG): container finished" podID="0368ea79-94a8-42e3-8986-dddbec83d755" containerID="40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2" exitCode=0 Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.895931 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgwgp" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.895928 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgwgp" event={"ID":"0368ea79-94a8-42e3-8986-dddbec83d755","Type":"ContainerDied","Data":"40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2"} Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.896024 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgwgp" event={"ID":"0368ea79-94a8-42e3-8986-dddbec83d755","Type":"ContainerDied","Data":"7a180c063d8433886b9b99fe8382a8d973c8b6b4d1425416261139533b679d98"} Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.896073 4787 scope.go:117] "RemoveContainer" containerID="40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.924660 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgwgp"] Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.924774 4787 scope.go:117] "RemoveContainer" containerID="4c62d29717da4e2dafd29f1d1052ba2a291d903a0149abf52208f5ae61b47bc9" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.927020 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cgwgp"] Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.934180 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0368ea79-94a8-42e3-8986-dddbec83d755-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.934209 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0368ea79-94a8-42e3-8986-dddbec83d755-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.934219 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5r7l\" (UniqueName: \"kubernetes.io/projected/0368ea79-94a8-42e3-8986-dddbec83d755-kube-api-access-z5r7l\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.951758 4787 scope.go:117] "RemoveContainer" containerID="da0792067324438fea958c400a5fb011057b3895c2b4ec11d5379beb14d61d0d" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.971060 4787 scope.go:117] "RemoveContainer" containerID="40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2" Jan 27 07:55:16 crc kubenswrapper[4787]: E0127 07:55:16.971514 4787 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2\": container with ID starting with 40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2 not found: ID does not exist" containerID="40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.971588 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2"} err="failed to get container status \"40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2\": rpc error: code = NotFound desc = could not find container \"40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2\": container with ID starting with 40ccd79f42857c9cc5f59993f886e0ed52b8d538b0a585d795f0fbaf3b3c58f2 not found: ID does not exist" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.971613 4787 scope.go:117] "RemoveContainer" containerID="4c62d29717da4e2dafd29f1d1052ba2a291d903a0149abf52208f5ae61b47bc9" Jan 27 07:55:16 crc kubenswrapper[4787]: E0127 07:55:16.972100 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c62d29717da4e2dafd29f1d1052ba2a291d903a0149abf52208f5ae61b47bc9\": container with ID starting with 4c62d29717da4e2dafd29f1d1052ba2a291d903a0149abf52208f5ae61b47bc9 not found: ID does not exist" containerID="4c62d29717da4e2dafd29f1d1052ba2a291d903a0149abf52208f5ae61b47bc9" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.972126 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c62d29717da4e2dafd29f1d1052ba2a291d903a0149abf52208f5ae61b47bc9"} err="failed to get container status \"4c62d29717da4e2dafd29f1d1052ba2a291d903a0149abf52208f5ae61b47bc9\": rpc error: code = NotFound desc = could not find container \"4c62d29717da4e2dafd29f1d1052ba2a291d903a0149abf52208f5ae61b47bc9\": container with ID starting with 4c62d29717da4e2dafd29f1d1052ba2a291d903a0149abf52208f5ae61b47bc9 not found: ID does not exist" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.972143 4787 scope.go:117] "RemoveContainer" containerID="da0792067324438fea958c400a5fb011057b3895c2b4ec11d5379beb14d61d0d" Jan 27 07:55:16 crc kubenswrapper[4787]: E0127 07:55:16.972456 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da0792067324438fea958c400a5fb011057b3895c2b4ec11d5379beb14d61d0d\": container with ID starting with da0792067324438fea958c400a5fb011057b3895c2b4ec11d5379beb14d61d0d not found: ID does not exist" containerID="da0792067324438fea958c400a5fb011057b3895c2b4ec11d5379beb14d61d0d" Jan 27 07:55:16 crc kubenswrapper[4787]: I0127 07:55:16.972479 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da0792067324438fea958c400a5fb011057b3895c2b4ec11d5379beb14d61d0d"} err="failed to get container status \"da0792067324438fea958c400a5fb011057b3895c2b4ec11d5379beb14d61d0d\": rpc error: code = NotFound desc = could not find container \"da0792067324438fea958c400a5fb011057b3895c2b4ec11d5379beb14d61d0d\": container with ID starting with da0792067324438fea958c400a5fb011057b3895c2b4ec11d5379beb14d61d0d not found: ID does not exist" Jan 27 07:55:17 crc kubenswrapper[4787]: I0127 07:55:17.086991 4787 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="0368ea79-94a8-42e3-8986-dddbec83d755" path="/var/lib/kubelet/pods/0368ea79-94a8-42e3-8986-dddbec83d755/volumes" Jan 27 07:55:17 crc kubenswrapper[4787]: I0127 07:55:17.881232 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:55:17 crc kubenswrapper[4787]: I0127 07:55:17.927619 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.447204 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-777857c966-2h66n"] Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.447920 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" podUID="816883bf-afa6-47b8-b823-7603ebf5cc71" containerName="controller-manager" containerID="cri-o://5f1d64b82d7b846232716736a494d6727d50dc1adacd62ef1ab1de361e5c3498" gracePeriod=30 Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.531973 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr"] Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.532269 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" podUID="17fe0585-ad09-4585-9b4c-01698e2c295b" containerName="route-controller-manager" containerID="cri-o://dd74d0a47c882efd924ba6fb867cda61c83e9bb61f03bbd8c3f4c56cbfb22222" gracePeriod=30 Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.823485 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.823907 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.823984 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.824879 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748"} pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.824951 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" containerID="cri-o://8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748" gracePeriod=600 Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 
07:55:22.954648 4787 generic.go:334] "Generic (PLEG): container finished" podID="17fe0585-ad09-4585-9b4c-01698e2c295b" containerID="dd74d0a47c882efd924ba6fb867cda61c83e9bb61f03bbd8c3f4c56cbfb22222" exitCode=0 Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.954745 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" event={"ID":"17fe0585-ad09-4585-9b4c-01698e2c295b","Type":"ContainerDied","Data":"dd74d0a47c882efd924ba6fb867cda61c83e9bb61f03bbd8c3f4c56cbfb22222"} Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.957725 4787 generic.go:334] "Generic (PLEG): container finished" podID="816883bf-afa6-47b8-b823-7603ebf5cc71" containerID="5f1d64b82d7b846232716736a494d6727d50dc1adacd62ef1ab1de361e5c3498" exitCode=0 Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.957795 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" event={"ID":"816883bf-afa6-47b8-b823-7603ebf5cc71","Type":"ContainerDied","Data":"5f1d64b82d7b846232716736a494d6727d50dc1adacd62ef1ab1de361e5c3498"} Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.960198 4787 generic.go:334] "Generic (PLEG): container finished" podID="f051e184-acac-47cf-9e04-9df648288715" containerID="8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748" exitCode=0 Jan 27 07:55:22 crc kubenswrapper[4787]: I0127 07:55:22.960230 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerDied","Data":"8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748"} Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.027867 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.119945 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.136847 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fe0585-ad09-4585-9b4c-01698e2c295b-config\") pod \"17fe0585-ad09-4585-9b4c-01698e2c295b\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.137262 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17fe0585-ad09-4585-9b4c-01698e2c295b-serving-cert\") pod \"17fe0585-ad09-4585-9b4c-01698e2c295b\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.138715 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b5xw\" (UniqueName: \"kubernetes.io/projected/17fe0585-ad09-4585-9b4c-01698e2c295b-kube-api-access-9b5xw\") pod \"17fe0585-ad09-4585-9b4c-01698e2c295b\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.138840 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17fe0585-ad09-4585-9b4c-01698e2c295b-client-ca\") pod \"17fe0585-ad09-4585-9b4c-01698e2c295b\" (UID: \"17fe0585-ad09-4585-9b4c-01698e2c295b\") " Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.138058 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17fe0585-ad09-4585-9b4c-01698e2c295b-config" (OuterVolumeSpecName: "config") pod "17fe0585-ad09-4585-9b4c-01698e2c295b" (UID: "17fe0585-ad09-4585-9b4c-01698e2c295b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.139538 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17fe0585-ad09-4585-9b4c-01698e2c295b-client-ca" (OuterVolumeSpecName: "client-ca") pod "17fe0585-ad09-4585-9b4c-01698e2c295b" (UID: "17fe0585-ad09-4585-9b4c-01698e2c295b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.145793 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17fe0585-ad09-4585-9b4c-01698e2c295b-kube-api-access-9b5xw" (OuterVolumeSpecName: "kube-api-access-9b5xw") pod "17fe0585-ad09-4585-9b4c-01698e2c295b" (UID: "17fe0585-ad09-4585-9b4c-01698e2c295b"). InnerVolumeSpecName "kube-api-access-9b5xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.146013 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fe0585-ad09-4585-9b4c-01698e2c295b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17fe0585-ad09-4585-9b4c-01698e2c295b" (UID: "17fe0585-ad09-4585-9b4c-01698e2c295b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.240102 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/816883bf-afa6-47b8-b823-7603ebf5cc71-serving-cert\") pod \"816883bf-afa6-47b8-b823-7603ebf5cc71\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.240192 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-client-ca\") pod \"816883bf-afa6-47b8-b823-7603ebf5cc71\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.240235 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-config\") pod \"816883bf-afa6-47b8-b823-7603ebf5cc71\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.240266 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-proxy-ca-bundles\") pod \"816883bf-afa6-47b8-b823-7603ebf5cc71\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.240308 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7bl5\" (UniqueName: \"kubernetes.io/projected/816883bf-afa6-47b8-b823-7603ebf5cc71-kube-api-access-s7bl5\") pod \"816883bf-afa6-47b8-b823-7603ebf5cc71\" (UID: \"816883bf-afa6-47b8-b823-7603ebf5cc71\") " Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.241296 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-client-ca" (OuterVolumeSpecName: "client-ca") pod "816883bf-afa6-47b8-b823-7603ebf5cc71" (UID: "816883bf-afa6-47b8-b823-7603ebf5cc71"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.241392 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-config" (OuterVolumeSpecName: "config") pod "816883bf-afa6-47b8-b823-7603ebf5cc71" (UID: "816883bf-afa6-47b8-b823-7603ebf5cc71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.241423 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "816883bf-afa6-47b8-b823-7603ebf5cc71" (UID: "816883bf-afa6-47b8-b823-7603ebf5cc71"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.241753 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.241788 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b5xw\" (UniqueName: \"kubernetes.io/projected/17fe0585-ad09-4585-9b4c-01698e2c295b-kube-api-access-9b5xw\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.241807 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.241822 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17fe0585-ad09-4585-9b4c-01698e2c295b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.241836 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fe0585-ad09-4585-9b4c-01698e2c295b-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.241847 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17fe0585-ad09-4585-9b4c-01698e2c295b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.241859 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/816883bf-afa6-47b8-b823-7603ebf5cc71-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.246184 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/816883bf-afa6-47b8-b823-7603ebf5cc71-kube-api-access-s7bl5" (OuterVolumeSpecName: "kube-api-access-s7bl5") pod "816883bf-afa6-47b8-b823-7603ebf5cc71" (UID: "816883bf-afa6-47b8-b823-7603ebf5cc71"). InnerVolumeSpecName "kube-api-access-s7bl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.246639 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/816883bf-afa6-47b8-b823-7603ebf5cc71-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "816883bf-afa6-47b8-b823-7603ebf5cc71" (UID: "816883bf-afa6-47b8-b823-7603ebf5cc71"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.343168 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7bl5\" (UniqueName: \"kubernetes.io/projected/816883bf-afa6-47b8-b823-7603ebf5cc71-kube-api-access-s7bl5\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.343635 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/816883bf-afa6-47b8-b823-7603ebf5cc71-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.847697 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm"] Jan 27 07:55:23 crc kubenswrapper[4787]: E0127 07:55:23.848088 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0368ea79-94a8-42e3-8986-dddbec83d755" containerName="extract-content" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.848109 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0368ea79-94a8-42e3-8986-dddbec83d755" containerName="extract-content" Jan 27 07:55:23 crc kubenswrapper[4787]: E0127 07:55:23.848127 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fe0585-ad09-4585-9b4c-01698e2c295b" containerName="route-controller-manager" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.848136 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fe0585-ad09-4585-9b4c-01698e2c295b" containerName="route-controller-manager" Jan 27 07:55:23 crc kubenswrapper[4787]: E0127 07:55:23.848161 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="816883bf-afa6-47b8-b823-7603ebf5cc71" containerName="controller-manager" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.848170 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="816883bf-afa6-47b8-b823-7603ebf5cc71" containerName="controller-manager" Jan 27 07:55:23 crc kubenswrapper[4787]: E0127 07:55:23.848183 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0368ea79-94a8-42e3-8986-dddbec83d755" containerName="extract-utilities" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.848192 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0368ea79-94a8-42e3-8986-dddbec83d755" containerName="extract-utilities" Jan 27 07:55:23 crc kubenswrapper[4787]: E0127 07:55:23.848202 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0368ea79-94a8-42e3-8986-dddbec83d755" containerName="registry-server" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.848211 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0368ea79-94a8-42e3-8986-dddbec83d755" containerName="registry-server" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.848330 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0368ea79-94a8-42e3-8986-dddbec83d755" containerName="registry-server" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.848351 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="816883bf-afa6-47b8-b823-7603ebf5cc71" containerName="controller-manager" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.848364 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="17fe0585-ad09-4585-9b4c-01698e2c295b" containerName="route-controller-manager" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.848997 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.855859 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw"] Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.857908 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.864824 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw"] Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.868793 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm"] Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.950767 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv7tn\" (UniqueName: \"kubernetes.io/projected/50febbec-2599-4f05-8482-938e97baa480-kube-api-access-gv7tn\") pod \"route-controller-manager-77c8d84cd6-dnssw\" (UID: \"50febbec-2599-4f05-8482-938e97baa480\") " pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.950887 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50febbec-2599-4f05-8482-938e97baa480-serving-cert\") pod \"route-controller-manager-77c8d84cd6-dnssw\" (UID: \"50febbec-2599-4f05-8482-938e97baa480\") " pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.950915 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-proxy-ca-bundles\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.950939 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50febbec-2599-4f05-8482-938e97baa480-client-ca\") pod \"route-controller-manager-77c8d84cd6-dnssw\" (UID: \"50febbec-2599-4f05-8482-938e97baa480\") " pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.950958 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wdxf\" (UniqueName: \"kubernetes.io/projected/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-kube-api-access-6wdxf\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.951053 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50febbec-2599-4f05-8482-938e97baa480-config\") pod \"route-controller-manager-77c8d84cd6-dnssw\" (UID: \"50febbec-2599-4f05-8482-938e97baa480\") " 
pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.951117 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-client-ca\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.951168 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-config\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.951202 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-serving-cert\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.969071 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" event={"ID":"17fe0585-ad09-4585-9b4c-01698e2c295b","Type":"ContainerDied","Data":"5d2b198f58bbf5e2fc1127f374689f197db99d090e6ffaf6f08fba9b1669dc76"} Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.969115 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.969154 4787 scope.go:117] "RemoveContainer" containerID="dd74d0a47c882efd924ba6fb867cda61c83e9bb61f03bbd8c3f4c56cbfb22222" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.973177 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" event={"ID":"816883bf-afa6-47b8-b823-7603ebf5cc71","Type":"ContainerDied","Data":"c09f056e6b522f051bebfcf76d79e3302d7058b741907e39732f827b109cc806"} Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.973250 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-777857c966-2h66n" Jan 27 07:55:23 crc kubenswrapper[4787]: I0127 07:55:23.977225 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerStarted","Data":"ef7962ef772271e4959fec84d91ffb697a93484f02f3f1aa59cfda3d789d7c84"} Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.004812 4787 scope.go:117] "RemoveContainer" containerID="5f1d64b82d7b846232716736a494d6727d50dc1adacd62ef1ab1de361e5c3498" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.018390 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr"] Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.024046 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-659f59795d-4hwcr"] Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.031169 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-777857c966-2h66n"] Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.034620 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-777857c966-2h66n"] Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.053243 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50febbec-2599-4f05-8482-938e97baa480-config\") pod \"route-controller-manager-77c8d84cd6-dnssw\" (UID: \"50febbec-2599-4f05-8482-938e97baa480\") " pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.053337 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-client-ca\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.053390 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-config\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.053428 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-serving-cert\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.053459 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv7tn\" (UniqueName: \"kubernetes.io/projected/50febbec-2599-4f05-8482-938e97baa480-kube-api-access-gv7tn\") pod \"route-controller-manager-77c8d84cd6-dnssw\" (UID: \"50febbec-2599-4f05-8482-938e97baa480\") " pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 
07:55:24.053564 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50febbec-2599-4f05-8482-938e97baa480-serving-cert\") pod \"route-controller-manager-77c8d84cd6-dnssw\" (UID: \"50febbec-2599-4f05-8482-938e97baa480\") " pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.053591 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-proxy-ca-bundles\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.053610 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50febbec-2599-4f05-8482-938e97baa480-client-ca\") pod \"route-controller-manager-77c8d84cd6-dnssw\" (UID: \"50febbec-2599-4f05-8482-938e97baa480\") " pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.053629 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wdxf\" (UniqueName: \"kubernetes.io/projected/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-kube-api-access-6wdxf\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.057017 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50febbec-2599-4f05-8482-938e97baa480-config\") pod \"route-controller-manager-77c8d84cd6-dnssw\" (UID: \"50febbec-2599-4f05-8482-938e97baa480\") " pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.054693 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50febbec-2599-4f05-8482-938e97baa480-client-ca\") pod \"route-controller-manager-77c8d84cd6-dnssw\" (UID: \"50febbec-2599-4f05-8482-938e97baa480\") " pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.058313 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-config\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.059203 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-client-ca\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.059873 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/50febbec-2599-4f05-8482-938e97baa480-serving-cert\") pod \"route-controller-manager-77c8d84cd6-dnssw\" (UID: \"50febbec-2599-4f05-8482-938e97baa480\") " pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.066356 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-serving-cert\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.071192 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv7tn\" (UniqueName: \"kubernetes.io/projected/50febbec-2599-4f05-8482-938e97baa480-kube-api-access-gv7tn\") pod \"route-controller-manager-77c8d84cd6-dnssw\" (UID: \"50febbec-2599-4f05-8482-938e97baa480\") " pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.072758 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-proxy-ca-bundles\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.076335 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wdxf\" (UniqueName: \"kubernetes.io/projected/bcf43325-3dcd-41bf-aa3f-8741d7cb20f7-kube-api-access-6wdxf\") pod \"controller-manager-7f8f44c5c4-qt2tm\" (UID: \"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7\") " pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.174506 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.183327 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.457937 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm"] Jan 27 07:55:24 crc kubenswrapper[4787]: W0127 07:55:24.476698 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcf43325_3dcd_41bf_aa3f_8741d7cb20f7.slice/crio-9c919a94fa60b1a23da41c93e22175cf0959eb4014eddc15250287cd46a98e6b WatchSource:0}: Error finding container 9c919a94fa60b1a23da41c93e22175cf0959eb4014eddc15250287cd46a98e6b: Status 404 returned error can't find the container with id 9c919a94fa60b1a23da41c93e22175cf0959eb4014eddc15250287cd46a98e6b Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.512960 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw"] Jan 27 07:55:24 crc kubenswrapper[4787]: W0127 07:55:24.526712 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50febbec_2599_4f05_8482_938e97baa480.slice/crio-69eae831a34390bb4f70a1e1495a4d1c1c9d40a31b059ff9f27d0d326162fa1b WatchSource:0}: Error finding container 69eae831a34390bb4f70a1e1495a4d1c1c9d40a31b059ff9f27d0d326162fa1b: Status 404 returned error can't find the container with id 69eae831a34390bb4f70a1e1495a4d1c1c9d40a31b059ff9f27d0d326162fa1b Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.988071 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" event={"ID":"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7","Type":"ContainerStarted","Data":"5132e561fe24db2363792d271da994726d3188d66693d1e5bf841773ea1429e5"} Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.988680 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" event={"ID":"bcf43325-3dcd-41bf-aa3f-8741d7cb20f7","Type":"ContainerStarted","Data":"9c919a94fa60b1a23da41c93e22175cf0959eb4014eddc15250287cd46a98e6b"} Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.988722 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.991236 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" event={"ID":"50febbec-2599-4f05-8482-938e97baa480","Type":"ContainerStarted","Data":"f212a3e2eaa7a378e3de8d39bbadf3523fe6313064da1af261e8fed4159ff76e"} Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.991294 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" event={"ID":"50febbec-2599-4f05-8482-938e97baa480","Type":"ContainerStarted","Data":"69eae831a34390bb4f70a1e1495a4d1c1c9d40a31b059ff9f27d0d326162fa1b"} Jan 27 07:55:24 crc kubenswrapper[4787]: I0127 07:55:24.994718 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" Jan 27 07:55:25 crc kubenswrapper[4787]: I0127 07:55:25.019326 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-7f8f44c5c4-qt2tm" podStartSLOduration=3.019304334 podStartE2EDuration="3.019304334s" podCreationTimestamp="2026-01-27 07:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:55:25.018332552 +0000 UTC m=+230.670688064" watchObservedRunningTime="2026-01-27 07:55:25.019304334 +0000 UTC m=+230.671659826" Jan 27 07:55:25 crc kubenswrapper[4787]: I0127 07:55:25.062381 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" podStartSLOduration=3.062356264 podStartE2EDuration="3.062356264s" podCreationTimestamp="2026-01-27 07:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:55:25.061028658 +0000 UTC m=+230.713384140" watchObservedRunningTime="2026-01-27 07:55:25.062356264 +0000 UTC m=+230.714711756" Jan 27 07:55:25 crc kubenswrapper[4787]: I0127 07:55:25.095191 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17fe0585-ad09-4585-9b4c-01698e2c295b" path="/var/lib/kubelet/pods/17fe0585-ad09-4585-9b4c-01698e2c295b/volumes" Jan 27 07:55:25 crc kubenswrapper[4787]: I0127 07:55:25.096105 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="816883bf-afa6-47b8-b823-7603ebf5cc71" path="/var/lib/kubelet/pods/816883bf-afa6-47b8-b823-7603ebf5cc71/volumes" Jan 27 07:55:25 crc kubenswrapper[4787]: I0127 07:55:25.998284 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:26 crc kubenswrapper[4787]: I0127 07:55:26.008898 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77c8d84cd6-dnssw" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.292262 4787 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.294576 4787 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.294754 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.295055 4787 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.295039 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df" gracePeriod=15 Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.295074 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28" gracePeriod=15 Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.295158 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5" gracePeriod=15 Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.295248 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986" gracePeriod=15 Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.295287 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b" gracePeriod=15 Jan 27 07:55:33 crc kubenswrapper[4787]: E0127 07:55:33.295662 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.295714 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 07:55:33 crc kubenswrapper[4787]: E0127 07:55:33.295749 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.295767 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 07:55:33 crc kubenswrapper[4787]: E0127 07:55:33.295788 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.295860 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:55:33 crc kubenswrapper[4787]: E0127 07:55:33.295881 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.295896 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 07:55:33 crc kubenswrapper[4787]: E0127 07:55:33.295919 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.295933 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 07:55:33 crc kubenswrapper[4787]: E0127 07:55:33.295961 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.295976 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:55:33 crc kubenswrapper[4787]: E0127 07:55:33.296017 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.296033 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.296282 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.296315 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.296337 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.296359 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.296379 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.296395 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.296424 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 07:55:33 crc kubenswrapper[4787]: E0127 07:55:33.296686 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.296712 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.299643 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.430309 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.430811 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.430852 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.430902 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.430936 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.430965 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.430994 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.431028 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.532620 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.532691 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.532764 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.532807 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.532848 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.532884 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.532931 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.532996 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.533048 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.533110 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.533149 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.533194 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.533230 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.533247 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.533269 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:33 crc kubenswrapper[4787]: I0127 07:55:33.533287 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:34 crc kubenswrapper[4787]: I0127 07:55:34.086902 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 07:55:34 crc kubenswrapper[4787]: I0127 07:55:34.089151 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 07:55:34 crc kubenswrapper[4787]: I0127 07:55:34.090042 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28" exitCode=0 Jan 27 07:55:34 crc kubenswrapper[4787]: I0127 07:55:34.090084 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b" exitCode=0 Jan 27 07:55:34 crc kubenswrapper[4787]: I0127 07:55:34.090098 4787 generic.go:334] 
"Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5" exitCode=0 Jan 27 07:55:34 crc kubenswrapper[4787]: I0127 07:55:34.090112 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986" exitCode=2 Jan 27 07:55:34 crc kubenswrapper[4787]: I0127 07:55:34.090226 4787 scope.go:117] "RemoveContainer" containerID="879d855210b326ad3cbb964024c0b48b7373519deae8b974b9f5864416c15ef3" Jan 27 07:55:34 crc kubenswrapper[4787]: I0127 07:55:34.093291 4787 generic.go:334] "Generic (PLEG): container finished" podID="d78a135f-a7de-4bb7-b2aa-168fb353658f" containerID="86b21223e074e0eb982b5516ae2faf73f9fccba44d21367b5f9b9319a1d62acc" exitCode=0 Jan 27 07:55:34 crc kubenswrapper[4787]: I0127 07:55:34.093367 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d78a135f-a7de-4bb7-b2aa-168fb353658f","Type":"ContainerDied","Data":"86b21223e074e0eb982b5516ae2faf73f9fccba44d21367b5f9b9319a1d62acc"} Jan 27 07:55:34 crc kubenswrapper[4787]: I0127 07:55:34.095429 4787 status_manager.go:851] "Failed to get status for pod" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:34 crc kubenswrapper[4787]: E0127 07:55:34.176185 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:34 crc kubenswrapper[4787]: E0127 07:55:34.176772 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:34 crc kubenswrapper[4787]: E0127 07:55:34.177087 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:34 crc kubenswrapper[4787]: E0127 07:55:34.177258 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:34 crc kubenswrapper[4787]: E0127 07:55:34.177424 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:34 crc kubenswrapper[4787]: I0127 07:55:34.177449 4787 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 07:55:34 crc kubenswrapper[4787]: E0127 07:55:34.177626 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" 
interval="200ms" Jan 27 07:55:34 crc kubenswrapper[4787]: E0127 07:55:34.379295 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="400ms" Jan 27 07:55:34 crc kubenswrapper[4787]: E0127 07:55:34.780913 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="800ms" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.087657 4787 status_manager.go:851] "Failed to get status for pod" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.104076 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.510790 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.511873 4787 status_manager.go:851] "Failed to get status for pod" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.575776 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d78a135f-a7de-4bb7-b2aa-168fb353658f-var-lock\") pod \"d78a135f-a7de-4bb7-b2aa-168fb353658f\" (UID: \"d78a135f-a7de-4bb7-b2aa-168fb353658f\") " Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.576163 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d78a135f-a7de-4bb7-b2aa-168fb353658f-kubelet-dir\") pod \"d78a135f-a7de-4bb7-b2aa-168fb353658f\" (UID: \"d78a135f-a7de-4bb7-b2aa-168fb353658f\") " Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.575906 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78a135f-a7de-4bb7-b2aa-168fb353658f-var-lock" (OuterVolumeSpecName: "var-lock") pod "d78a135f-a7de-4bb7-b2aa-168fb353658f" (UID: "d78a135f-a7de-4bb7-b2aa-168fb353658f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.576245 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78a135f-a7de-4bb7-b2aa-168fb353658f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d78a135f-a7de-4bb7-b2aa-168fb353658f" (UID: "d78a135f-a7de-4bb7-b2aa-168fb353658f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.576343 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d78a135f-a7de-4bb7-b2aa-168fb353658f-kube-api-access\") pod \"d78a135f-a7de-4bb7-b2aa-168fb353658f\" (UID: \"d78a135f-a7de-4bb7-b2aa-168fb353658f\") " Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.576992 4787 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d78a135f-a7de-4bb7-b2aa-168fb353658f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.577015 4787 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d78a135f-a7de-4bb7-b2aa-168fb353658f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:35 crc kubenswrapper[4787]: E0127 07:55:35.582310 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="1.6s" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.589445 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78a135f-a7de-4bb7-b2aa-168fb353658f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d78a135f-a7de-4bb7-b2aa-168fb353658f" (UID: "d78a135f-a7de-4bb7-b2aa-168fb353658f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.679361 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d78a135f-a7de-4bb7-b2aa-168fb353658f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.681109 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.682405 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.682931 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.683103 4787 status_manager.go:851] "Failed to get status for pod" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.780230 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.780355 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.780408 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.780821 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.780880 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.780911 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.882834 4787 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.882884 4787 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:35 crc kubenswrapper[4787]: I0127 07:55:35.882896 4787 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.114902 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d78a135f-a7de-4bb7-b2aa-168fb353658f","Type":"ContainerDied","Data":"c5a010ebcd473be981facf7a933739c86185647f5b265aa8a4c5d4030136ffd4"} Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.114946 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.114958 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5a010ebcd473be981facf7a933739c86185647f5b265aa8a4c5d4030136ffd4" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.119227 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.120181 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df" exitCode=0 Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.120255 4787 scope.go:117] "RemoveContainer" containerID="215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.120306 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.133185 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.133757 4787 status_manager.go:851] "Failed to get status for pod" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.142387 4787 scope.go:117] "RemoveContainer" containerID="f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.142386 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.142958 4787 status_manager.go:851] "Failed to get status for pod" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.158705 4787 scope.go:117] "RemoveContainer" containerID="53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.177167 4787 scope.go:117] "RemoveContainer" containerID="af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.193311 4787 scope.go:117] "RemoveContainer" containerID="35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.216303 4787 scope.go:117] "RemoveContainer" containerID="c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.236944 4787 scope.go:117] "RemoveContainer" containerID="215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28" Jan 27 07:55:36 crc kubenswrapper[4787]: E0127 07:55:36.237395 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\": container with ID starting with 215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28 not found: ID does not exist" containerID="215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.237435 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28"} err="failed to get container status \"215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\": rpc error: code = NotFound desc = could not find container 
\"215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28\": container with ID starting with 215bbc991c576293e40d9ba239e697bfad702383af70483ddf0acea311b4ae28 not found: ID does not exist" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.237463 4787 scope.go:117] "RemoveContainer" containerID="f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b" Jan 27 07:55:36 crc kubenswrapper[4787]: E0127 07:55:36.237960 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\": container with ID starting with f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b not found: ID does not exist" containerID="f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.237998 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b"} err="failed to get container status \"f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\": rpc error: code = NotFound desc = could not find container \"f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b\": container with ID starting with f3424e1f77a68fac6a8ccd12e018ce28850817c99e72cef7edbff0941124c46b not found: ID does not exist" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.238015 4787 scope.go:117] "RemoveContainer" containerID="53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5" Jan 27 07:55:36 crc kubenswrapper[4787]: E0127 07:55:36.238292 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\": container with ID starting with 53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5 not found: ID does not exist" containerID="53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.238322 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5"} err="failed to get container status \"53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\": rpc error: code = NotFound desc = could not find container \"53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5\": container with ID starting with 53dd18ffb0f39abccd8edc121977448f4c491f6be9acee866dde8cbeb7c52ab5 not found: ID does not exist" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.238337 4787 scope.go:117] "RemoveContainer" containerID="af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986" Jan 27 07:55:36 crc kubenswrapper[4787]: E0127 07:55:36.239265 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\": container with ID starting with af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986 not found: ID does not exist" containerID="af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.239292 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986"} 
err="failed to get container status \"af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\": rpc error: code = NotFound desc = could not find container \"af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986\": container with ID starting with af5452f3980a8a4614721cb60fffffb3b1f3f5d8d82928249857ed2dd3fb5986 not found: ID does not exist" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.239320 4787 scope.go:117] "RemoveContainer" containerID="35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df" Jan 27 07:55:36 crc kubenswrapper[4787]: E0127 07:55:36.239663 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\": container with ID starting with 35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df not found: ID does not exist" containerID="35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.239686 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df"} err="failed to get container status \"35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\": rpc error: code = NotFound desc = could not find container \"35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df\": container with ID starting with 35e90c0899d9ae9ea7cce3f5904955f1f3a80e147efbd766e9ed3b34014f40df not found: ID does not exist" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.239703 4787 scope.go:117] "RemoveContainer" containerID="c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07" Jan 27 07:55:36 crc kubenswrapper[4787]: E0127 07:55:36.239993 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\": container with ID starting with c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07 not found: ID does not exist" containerID="c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07" Jan 27 07:55:36 crc kubenswrapper[4787]: I0127 07:55:36.240016 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07"} err="failed to get container status \"c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\": rpc error: code = NotFound desc = could not find container \"c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07\": container with ID starting with c37afd8d2562157729aca3685d7abae2bb6b9e081c2b456706dde875fd762c07 not found: ID does not exist" Jan 27 07:55:37 crc kubenswrapper[4787]: I0127 07:55:37.086052 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 07:55:37 crc kubenswrapper[4787]: E0127 07:55:37.183773 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="3.2s" Jan 27 07:55:38 crc kubenswrapper[4787]: E0127 07:55:38.339862 4787 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:38 crc kubenswrapper[4787]: I0127 07:55:38.340659 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:55:38 crc kubenswrapper[4787]: W0127 07:55:38.376276 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-21f446a50fff424bd33d3e9272ceb55cbd687e3d8ff6f7819f58b150d5a98b7d WatchSource:0}: Error finding container 21f446a50fff424bd33d3e9272ceb55cbd687e3d8ff6f7819f58b150d5a98b7d: Status 404 returned error can't find the container with id 21f446a50fff424bd33d3e9272ceb55cbd687e3d8ff6f7819f58b150d5a98b7d Jan 27 07:55:38 crc kubenswrapper[4787]: E0127 07:55:38.380653 4787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.181:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e875d4bfccc34 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 07:55:38.379750452 +0000 UTC m=+244.032105944,LastTimestamp:2026-01-27 07:55:38.379750452 +0000 UTC m=+244.032105944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 07:55:39 crc kubenswrapper[4787]: I0127 07:55:39.141260 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f4458c08183867e68d981a00f792886a32dfbdc05716b6e94fea396b9dc48aef"} Jan 27 07:55:39 crc kubenswrapper[4787]: I0127 07:55:39.141910 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"21f446a50fff424bd33d3e9272ceb55cbd687e3d8ff6f7819f58b150d5a98b7d"} Jan 27 07:55:39 crc kubenswrapper[4787]: I0127 07:55:39.142749 4787 status_manager.go:851] "Failed to get status for pod" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:39 crc kubenswrapper[4787]: E0127 07:55:39.142759 4787 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 
27 07:55:40 crc kubenswrapper[4787]: E0127 07:55:40.386182 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="6.4s" Jan 27 07:55:44 crc kubenswrapper[4787]: E0127 07:55:44.276388 4787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.181:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e875d4bfccc34 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 07:55:38.379750452 +0000 UTC m=+244.032105944,LastTimestamp:2026-01-27 07:55:38.379750452 +0000 UTC m=+244.032105944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 07:55:45 crc kubenswrapper[4787]: I0127 07:55:45.082050 4787 status_manager.go:851] "Failed to get status for pod" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:46 crc kubenswrapper[4787]: E0127 07:55:46.788539 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="7s" Jan 27 07:55:47 crc kubenswrapper[4787]: I0127 07:55:47.215798 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 07:55:47 crc kubenswrapper[4787]: I0127 07:55:47.216313 4787 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc" exitCode=1 Jan 27 07:55:47 crc kubenswrapper[4787]: I0127 07:55:47.216366 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc"} Jan 27 07:55:47 crc kubenswrapper[4787]: I0127 07:55:47.217219 4787 scope.go:117] "RemoveContainer" containerID="5c114c2274358e777b240765cca81ea54a2e334002317a86595b7bfec95a11fc" Jan 27 07:55:47 crc kubenswrapper[4787]: I0127 07:55:47.217754 4787 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:47 crc kubenswrapper[4787]: I0127 07:55:47.218230 4787 status_manager.go:851] "Failed to get status for pod" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:47 crc kubenswrapper[4787]: I0127 07:55:47.666318 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:55:48 crc kubenswrapper[4787]: I0127 07:55:48.076583 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:48 crc kubenswrapper[4787]: I0127 07:55:48.079133 4787 status_manager.go:851] "Failed to get status for pod" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:48 crc kubenswrapper[4787]: I0127 07:55:48.079956 4787 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:48 crc kubenswrapper[4787]: I0127 07:55:48.097333 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60eef58e-b2eb-43d9-a499-317083a89ca3" Jan 27 07:55:48 crc kubenswrapper[4787]: I0127 07:55:48.097382 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60eef58e-b2eb-43d9-a499-317083a89ca3" Jan 27 07:55:48 crc kubenswrapper[4787]: E0127 07:55:48.098406 4787 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:48 crc kubenswrapper[4787]: I0127 07:55:48.099506 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:48 crc kubenswrapper[4787]: W0127 07:55:48.136480 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-a3692298ea7f352b5f9fdfe2759bf58139ec526ee8697bef05964ee2aa853776 WatchSource:0}: Error finding container a3692298ea7f352b5f9fdfe2759bf58139ec526ee8697bef05964ee2aa853776: Status 404 returned error can't find the container with id a3692298ea7f352b5f9fdfe2759bf58139ec526ee8697bef05964ee2aa853776 Jan 27 07:55:48 crc kubenswrapper[4787]: I0127 07:55:48.224461 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a3692298ea7f352b5f9fdfe2759bf58139ec526ee8697bef05964ee2aa853776"} Jan 27 07:55:48 crc kubenswrapper[4787]: I0127 07:55:48.228382 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 07:55:48 crc kubenswrapper[4787]: I0127 07:55:48.228433 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"63a951246b7281cd3c5f01756adf84ab9703190160c4a4c451673d01fa32296b"} Jan 27 07:55:48 crc kubenswrapper[4787]: I0127 07:55:48.230123 4787 status_manager.go:851] "Failed to get status for pod" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:48 crc kubenswrapper[4787]: I0127 07:55:48.230982 4787 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:49 crc kubenswrapper[4787]: I0127 07:55:49.237097 4787 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="bf9ec3f1afa172cf32d1eed8b1918f285a270d70f45636c5b4822130ff205de0" exitCode=0 Jan 27 07:55:49 crc kubenswrapper[4787]: I0127 07:55:49.238444 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60eef58e-b2eb-43d9-a499-317083a89ca3" Jan 27 07:55:49 crc kubenswrapper[4787]: I0127 07:55:49.238467 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60eef58e-b2eb-43d9-a499-317083a89ca3" Jan 27 07:55:49 crc kubenswrapper[4787]: I0127 07:55:49.239005 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"bf9ec3f1afa172cf32d1eed8b1918f285a270d70f45636c5b4822130ff205de0"} Jan 27 07:55:49 crc kubenswrapper[4787]: I0127 07:55:49.239470 4787 status_manager.go:851] "Failed to get status for pod" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:49 crc kubenswrapper[4787]: E0127 07:55:49.239609 4787 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:49 crc kubenswrapper[4787]: I0127 07:55:49.239712 4787 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 27 07:55:50 crc kubenswrapper[4787]: I0127 07:55:50.247916 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"befb8460e9bf525a501a5835a5d4107a1c31c4466fc9376b13ef57093e2955d2"} Jan 27 07:55:50 crc kubenswrapper[4787]: I0127 07:55:50.248412 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bb430e1a05cc1877596767b21be0c1e1b9816a42810838142ef45b31552ad0f1"} Jan 27 07:55:50 crc kubenswrapper[4787]: I0127 07:55:50.248432 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b49e20d24c460f7c54c5bc969385d0a3e09ae12ac5c86ff2845bf0b9b490713"} Jan 27 07:55:51 crc kubenswrapper[4787]: I0127 07:55:51.257231 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"baf0aad206f83118f5062666b5f30c32aa1bfc9fd4d7af7b5a41ac7173c54d59"} Jan 27 07:55:51 crc kubenswrapper[4787]: I0127 07:55:51.258456 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3400869e2e418edbb069a1afa9ed9ba38155a0c4bc80d704190d932b95fa1a0f"} Jan 27 07:55:51 crc kubenswrapper[4787]: I0127 07:55:51.258571 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:51 crc kubenswrapper[4787]: I0127 07:55:51.257726 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60eef58e-b2eb-43d9-a499-317083a89ca3" Jan 27 07:55:51 crc kubenswrapper[4787]: I0127 07:55:51.258725 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60eef58e-b2eb-43d9-a499-317083a89ca3" Jan 27 07:55:53 crc kubenswrapper[4787]: I0127 07:55:53.099652 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:53 crc kubenswrapper[4787]: I0127 07:55:53.099866 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:53 crc kubenswrapper[4787]: I0127 07:55:53.107034 4787 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:54 crc kubenswrapper[4787]: I0127 07:55:54.957375 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:55:56 crc kubenswrapper[4787]: I0127 07:55:56.267402 4787 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:56 crc kubenswrapper[4787]: I0127 07:55:56.343179 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="32c63c91-e6ee-4a8f-bddd-a71cf503fa60" Jan 27 07:55:57 crc kubenswrapper[4787]: I0127 07:55:57.291069 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60eef58e-b2eb-43d9-a499-317083a89ca3" Jan 27 07:55:57 crc kubenswrapper[4787]: I0127 07:55:57.291128 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60eef58e-b2eb-43d9-a499-317083a89ca3" Jan 27 07:55:57 crc kubenswrapper[4787]: I0127 07:55:57.300936 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="32c63c91-e6ee-4a8f-bddd-a71cf503fa60" Jan 27 07:55:57 crc kubenswrapper[4787]: I0127 07:55:57.301779 4787 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://1b49e20d24c460f7c54c5bc969385d0a3e09ae12ac5c86ff2845bf0b9b490713" Jan 27 07:55:57 crc kubenswrapper[4787]: I0127 07:55:57.301810 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:55:57 crc kubenswrapper[4787]: I0127 07:55:57.666149 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:55:57 crc kubenswrapper[4787]: I0127 07:55:57.671151 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:55:58 crc kubenswrapper[4787]: I0127 07:55:58.297701 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60eef58e-b2eb-43d9-a499-317083a89ca3" Jan 27 07:55:58 crc kubenswrapper[4787]: I0127 07:55:58.297751 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60eef58e-b2eb-43d9-a499-317083a89ca3" Jan 27 07:55:58 crc kubenswrapper[4787]: I0127 07:55:58.302259 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="32c63c91-e6ee-4a8f-bddd-a71cf503fa60" Jan 27 07:55:58 crc kubenswrapper[4787]: I0127 07:55:58.307611 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:56:05 crc kubenswrapper[4787]: I0127 07:56:05.749543 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 07:56:06 crc kubenswrapper[4787]: I0127 07:56:06.474838 4787 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 07:56:06 crc kubenswrapper[4787]: I0127 07:56:06.601713 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 07:56:06 crc kubenswrapper[4787]: I0127 07:56:06.817174 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 07:56:06 crc kubenswrapper[4787]: I0127 07:56:06.948389 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 07:56:07 crc kubenswrapper[4787]: I0127 07:56:07.394151 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 07:56:07 crc kubenswrapper[4787]: I0127 07:56:07.397225 4787 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 07:56:07 crc kubenswrapper[4787]: I0127 07:56:07.406978 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:56:07 crc kubenswrapper[4787]: I0127 07:56:07.407087 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:56:07 crc kubenswrapper[4787]: I0127 07:56:07.415234 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:56:07 crc kubenswrapper[4787]: I0127 07:56:07.422897 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 07:56:07 crc kubenswrapper[4787]: I0127 07:56:07.445475 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=11.445450285 podStartE2EDuration="11.445450285s" podCreationTimestamp="2026-01-27 07:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:56:07.442250432 +0000 UTC m=+273.094605954" watchObservedRunningTime="2026-01-27 07:56:07.445450285 +0000 UTC m=+273.097805837" Jan 27 07:56:07 crc kubenswrapper[4787]: I0127 07:56:07.601138 4787 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 07:56:07 crc kubenswrapper[4787]: I0127 07:56:07.601905 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f4458c08183867e68d981a00f792886a32dfbdc05716b6e94fea396b9dc48aef" gracePeriod=5 Jan 27 07:56:08 crc kubenswrapper[4787]: I0127 07:56:08.132622 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 07:56:08 crc kubenswrapper[4787]: I0127 07:56:08.152800 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 07:56:08 crc kubenswrapper[4787]: I0127 07:56:08.275956 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 07:56:08 crc kubenswrapper[4787]: I0127 07:56:08.294229 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 07:56:08 crc kubenswrapper[4787]: I0127 07:56:08.415088 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 07:56:08 crc kubenswrapper[4787]: I0127 07:56:08.450909 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 07:56:08 crc kubenswrapper[4787]: I0127 07:56:08.557462 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 07:56:08 crc kubenswrapper[4787]: I0127 07:56:08.621184 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.005006 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.064440 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.281652 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.286581 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.314229 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.320881 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.343786 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.534954 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.646392 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.684915 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.719403 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.796823 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.811891 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.828967 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.961797 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 07:56:09 crc kubenswrapper[4787]: I0127 07:56:09.973810 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.026802 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.048055 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.157515 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.165384 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.221455 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.222930 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.228871 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.308875 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.393183 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.429602 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.512369 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.609038 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.613855 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.644978 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.669906 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.696626 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.807745 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.857593 4787 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.875701 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.879925 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.916368 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 07:56:10 crc kubenswrapper[4787]: I0127 07:56:10.947017 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.081961 4787 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.104008 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.166154 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.174010 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.249701 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.300865 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.336430 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.343265 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.349167 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.363579 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.412543 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.445814 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.467859 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.481350 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.559642 4787 reflector.go:368] Caches populated 
for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.581372 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.758229 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.823757 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.885618 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.892192 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.893009 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.976272 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 07:56:11 crc kubenswrapper[4787]: I0127 07:56:11.989541 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.000371 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.002954 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.076534 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.091748 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.091874 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.135516 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.333708 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.397824 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.422460 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.441490 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.463356 4787 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 
07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.529708 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.541888 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.598316 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.639471 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.663000 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.759279 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.819350 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.882987 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.910180 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 07:56:12 crc kubenswrapper[4787]: I0127 07:56:12.931833 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.008317 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.113112 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.129465 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.176623 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.189749 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.189835 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.296064 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.296141 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.296208 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.296396 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.297281 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.297356 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.297461 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.297659 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.297795 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.297891 4787 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.297973 4787 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.310811 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.333810 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.387645 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.399490 4787 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.400373 4787 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.400485 4787 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.412423 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.412672 4787 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f4458c08183867e68d981a00f792886a32dfbdc05716b6e94fea396b9dc48aef" exitCode=137 Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.412760 4787 scope.go:117] "RemoveContainer" containerID="f4458c08183867e68d981a00f792886a32dfbdc05716b6e94fea396b9dc48aef" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.412772 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.431695 4787 scope.go:117] "RemoveContainer" containerID="f4458c08183867e68d981a00f792886a32dfbdc05716b6e94fea396b9dc48aef" Jan 27 07:56:13 crc kubenswrapper[4787]: E0127 07:56:13.432345 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4458c08183867e68d981a00f792886a32dfbdc05716b6e94fea396b9dc48aef\": container with ID starting with f4458c08183867e68d981a00f792886a32dfbdc05716b6e94fea396b9dc48aef not found: ID does not exist" containerID="f4458c08183867e68d981a00f792886a32dfbdc05716b6e94fea396b9dc48aef" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.432383 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4458c08183867e68d981a00f792886a32dfbdc05716b6e94fea396b9dc48aef"} err="failed to get container status \"f4458c08183867e68d981a00f792886a32dfbdc05716b6e94fea396b9dc48aef\": rpc error: code = NotFound desc = could not find container \"f4458c08183867e68d981a00f792886a32dfbdc05716b6e94fea396b9dc48aef\": container with ID starting with f4458c08183867e68d981a00f792886a32dfbdc05716b6e94fea396b9dc48aef not found: ID does not exist" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.518599 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.583841 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.610294 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.747355 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.802831 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.850280 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.882658 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 07:56:13 crc kubenswrapper[4787]: I0127 07:56:13.984767 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.000427 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.081675 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.137179 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.165834 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.198320 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.227263 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.235845 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.287714 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.329411 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.476791 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.510580 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.523489 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.677088 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.681224 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.709190 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.787322 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.793450 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.838805 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.906060 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.933313 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 07:56:14 crc kubenswrapper[4787]: I0127 07:56:14.957813 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.075242 4787 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.086702 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.109228 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.161359 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.234078 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.286872 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.293413 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.299651 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.312338 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.339870 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.350481 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.392010 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.544269 4787 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.563102 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.631984 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.645380 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.671749 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.738506 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.762265 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.841373 4787 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.849825 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.949328 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.987659 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 07:56:15 crc kubenswrapper[4787]: I0127 07:56:15.996794 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.008638 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.122654 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.174877 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.221155 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.307602 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.312423 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.323252 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.325981 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.326054 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.492215 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.513173 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.569643 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.611362 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.661956 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.821792 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.826330 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.835618 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.844183 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.905538 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.936274 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.970208 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 07:56:16 crc kubenswrapper[4787]: I0127 07:56:16.993426 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.001744 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.103306 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.172660 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.289177 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.360054 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.360383 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.504888 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.573764 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.696813 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.798474 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.934426 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.935184 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.956907 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 07:56:17 crc kubenswrapper[4787]: I0127 07:56:17.991469 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.057178 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.125098 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.146766 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.150764 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.187975 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.199559 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.206465 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.261992 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.506639 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.617496 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.684998 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.750674 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.756884 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.844270 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.861823 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.862927 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.864411 4787 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.869608 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.903616 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.957490 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.981794 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 07:56:18 crc kubenswrapper[4787]: I0127 07:56:18.984513 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.050685 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.077624 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.203543 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.225643 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.238406 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.281378 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.302791 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.383695 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.469480 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.480627 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.592490 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-npc57"] Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.592959 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-npc57" podUID="7af45749-cd7f-490a-b2f6-bf979ea6467e" containerName="registry-server" containerID="cri-o://3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79" gracePeriod=30 Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 
07:56:19.611150 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.611610 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tnxzl"] Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.611914 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tnxzl" podUID="44cc5af2-0636-4589-a988-e7e32bfea075" containerName="registry-server" containerID="cri-o://e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223" gracePeriod=30 Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.616745 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h5d4g"] Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.616974 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" podUID="2e88d48e-6302-4a84-90a7-69446be90e4a" containerName="marketplace-operator" containerID="cri-o://380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef" gracePeriod=30 Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.618223 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.633255 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ht6h9"] Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.633896 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ht6h9" podUID="624e9a13-c9c5-4ef3-8628-056bfc65338b" containerName="registry-server" containerID="cri-o://c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864" gracePeriod=30 Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.645592 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kkjnl"] Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.646067 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kkjnl" podUID="1e42281e-20ed-4105-866f-878ffbf6c6eb" containerName="registry-server" containerID="cri-o://8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84" gracePeriod=30 Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.669210 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fkd29"] Jan 27 07:56:19 crc kubenswrapper[4787]: E0127 07:56:19.670379 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.670407 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 07:56:19 crc kubenswrapper[4787]: E0127 07:56:19.670431 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" containerName="installer" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.670440 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" containerName="installer" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 
07:56:19.670588 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.670601 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78a135f-a7de-4bb7-b2aa-168fb353658f" containerName="installer" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.671393 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.681282 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fkd29"] Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.706419 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.799399 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39675fef-cac4-48c9-bd77-e0ee695a5ab8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fkd29\" (UID: \"39675fef-cac4-48c9-bd77-e0ee695a5ab8\") " pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.799516 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5hm4\" (UniqueName: \"kubernetes.io/projected/39675fef-cac4-48c9-bd77-e0ee695a5ab8-kube-api-access-d5hm4\") pod \"marketplace-operator-79b997595-fkd29\" (UID: \"39675fef-cac4-48c9-bd77-e0ee695a5ab8\") " pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.799613 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39675fef-cac4-48c9-bd77-e0ee695a5ab8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fkd29\" (UID: \"39675fef-cac4-48c9-bd77-e0ee695a5ab8\") " pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.901344 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39675fef-cac4-48c9-bd77-e0ee695a5ab8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fkd29\" (UID: \"39675fef-cac4-48c9-bd77-e0ee695a5ab8\") " pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.901438 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39675fef-cac4-48c9-bd77-e0ee695a5ab8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fkd29\" (UID: \"39675fef-cac4-48c9-bd77-e0ee695a5ab8\") " pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.901480 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5hm4\" (UniqueName: \"kubernetes.io/projected/39675fef-cac4-48c9-bd77-e0ee695a5ab8-kube-api-access-d5hm4\") pod \"marketplace-operator-79b997595-fkd29\" (UID: \"39675fef-cac4-48c9-bd77-e0ee695a5ab8\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.904000 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39675fef-cac4-48c9-bd77-e0ee695a5ab8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fkd29\" (UID: \"39675fef-cac4-48c9-bd77-e0ee695a5ab8\") " pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.921397 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39675fef-cac4-48c9-bd77-e0ee695a5ab8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fkd29\" (UID: \"39675fef-cac4-48c9-bd77-e0ee695a5ab8\") " pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.926569 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5hm4\" (UniqueName: \"kubernetes.io/projected/39675fef-cac4-48c9-bd77-e0ee695a5ab8-kube-api-access-d5hm4\") pod \"marketplace-operator-79b997595-fkd29\" (UID: \"39675fef-cac4-48c9-bd77-e0ee695a5ab8\") " pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.967687 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 07:56:19 crc kubenswrapper[4787]: I0127 07:56:19.993430 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.037620 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.104529 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.154390 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.171040 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.171241 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.187801 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.206133 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44cc5af2-0636-4589-a988-e7e32bfea075-utilities\") pod \"44cc5af2-0636-4589-a988-e7e32bfea075\" (UID: \"44cc5af2-0636-4589-a988-e7e32bfea075\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.206211 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e88d48e-6302-4a84-90a7-69446be90e4a-marketplace-trusted-ca\") pod \"2e88d48e-6302-4a84-90a7-69446be90e4a\" (UID: \"2e88d48e-6302-4a84-90a7-69446be90e4a\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.206240 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624e9a13-c9c5-4ef3-8628-056bfc65338b-utilities\") pod \"624e9a13-c9c5-4ef3-8628-056bfc65338b\" (UID: \"624e9a13-c9c5-4ef3-8628-056bfc65338b\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.206272 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tmpp\" (UniqueName: \"kubernetes.io/projected/44cc5af2-0636-4589-a988-e7e32bfea075-kube-api-access-5tmpp\") pod \"44cc5af2-0636-4589-a988-e7e32bfea075\" (UID: \"44cc5af2-0636-4589-a988-e7e32bfea075\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.206304 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkvcd\" (UniqueName: \"kubernetes.io/projected/624e9a13-c9c5-4ef3-8628-056bfc65338b-kube-api-access-nkvcd\") pod \"624e9a13-c9c5-4ef3-8628-056bfc65338b\" (UID: \"624e9a13-c9c5-4ef3-8628-056bfc65338b\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.206363 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2e88d48e-6302-4a84-90a7-69446be90e4a-marketplace-operator-metrics\") pod \"2e88d48e-6302-4a84-90a7-69446be90e4a\" (UID: \"2e88d48e-6302-4a84-90a7-69446be90e4a\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.206437 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44cc5af2-0636-4589-a988-e7e32bfea075-catalog-content\") pod \"44cc5af2-0636-4589-a988-e7e32bfea075\" (UID: \"44cc5af2-0636-4589-a988-e7e32bfea075\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.206490 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624e9a13-c9c5-4ef3-8628-056bfc65338b-catalog-content\") pod \"624e9a13-c9c5-4ef3-8628-056bfc65338b\" (UID: \"624e9a13-c9c5-4ef3-8628-056bfc65338b\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.206591 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl2ds\" (UniqueName: \"kubernetes.io/projected/2e88d48e-6302-4a84-90a7-69446be90e4a-kube-api-access-nl2ds\") pod \"2e88d48e-6302-4a84-90a7-69446be90e4a\" (UID: \"2e88d48e-6302-4a84-90a7-69446be90e4a\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.212514 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2e88d48e-6302-4a84-90a7-69446be90e4a-kube-api-access-nl2ds" (OuterVolumeSpecName: "kube-api-access-nl2ds") pod "2e88d48e-6302-4a84-90a7-69446be90e4a" (UID: "2e88d48e-6302-4a84-90a7-69446be90e4a"). InnerVolumeSpecName "kube-api-access-nl2ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.213653 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/624e9a13-c9c5-4ef3-8628-056bfc65338b-utilities" (OuterVolumeSpecName: "utilities") pod "624e9a13-c9c5-4ef3-8628-056bfc65338b" (UID: "624e9a13-c9c5-4ef3-8628-056bfc65338b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.214486 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e88d48e-6302-4a84-90a7-69446be90e4a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2e88d48e-6302-4a84-90a7-69446be90e4a" (UID: "2e88d48e-6302-4a84-90a7-69446be90e4a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.214793 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44cc5af2-0636-4589-a988-e7e32bfea075-utilities" (OuterVolumeSpecName: "utilities") pod "44cc5af2-0636-4589-a988-e7e32bfea075" (UID: "44cc5af2-0636-4589-a988-e7e32bfea075"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.219060 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/624e9a13-c9c5-4ef3-8628-056bfc65338b-kube-api-access-nkvcd" (OuterVolumeSpecName: "kube-api-access-nkvcd") pod "624e9a13-c9c5-4ef3-8628-056bfc65338b" (UID: "624e9a13-c9c5-4ef3-8628-056bfc65338b"). InnerVolumeSpecName "kube-api-access-nkvcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.219989 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e88d48e-6302-4a84-90a7-69446be90e4a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2e88d48e-6302-4a84-90a7-69446be90e4a" (UID: "2e88d48e-6302-4a84-90a7-69446be90e4a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.224087 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44cc5af2-0636-4589-a988-e7e32bfea075-kube-api-access-5tmpp" (OuterVolumeSpecName: "kube-api-access-5tmpp") pod "44cc5af2-0636-4589-a988-e7e32bfea075" (UID: "44cc5af2-0636-4589-a988-e7e32bfea075"). InnerVolumeSpecName "kube-api-access-5tmpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.269196 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/624e9a13-c9c5-4ef3-8628-056bfc65338b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "624e9a13-c9c5-4ef3-8628-056bfc65338b" (UID: "624e9a13-c9c5-4ef3-8628-056bfc65338b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.277210 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44cc5af2-0636-4589-a988-e7e32bfea075-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44cc5af2-0636-4589-a988-e7e32bfea075" (UID: "44cc5af2-0636-4589-a988-e7e32bfea075"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.301415 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.308045 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af45749-cd7f-490a-b2f6-bf979ea6467e-utilities\") pod \"7af45749-cd7f-490a-b2f6-bf979ea6467e\" (UID: \"7af45749-cd7f-490a-b2f6-bf979ea6467e\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.308282 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af45749-cd7f-490a-b2f6-bf979ea6467e-catalog-content\") pod \"7af45749-cd7f-490a-b2f6-bf979ea6467e\" (UID: \"7af45749-cd7f-490a-b2f6-bf979ea6467e\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.308327 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz5bs\" (UniqueName: \"kubernetes.io/projected/7af45749-cd7f-490a-b2f6-bf979ea6467e-kube-api-access-wz5bs\") pod \"7af45749-cd7f-490a-b2f6-bf979ea6467e\" (UID: \"7af45749-cd7f-490a-b2f6-bf979ea6467e\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.308576 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl2ds\" (UniqueName: \"kubernetes.io/projected/2e88d48e-6302-4a84-90a7-69446be90e4a-kube-api-access-nl2ds\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.308596 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44cc5af2-0636-4589-a988-e7e32bfea075-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.308606 4787 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e88d48e-6302-4a84-90a7-69446be90e4a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.308616 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/624e9a13-c9c5-4ef3-8628-056bfc65338b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.308626 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tmpp\" (UniqueName: \"kubernetes.io/projected/44cc5af2-0636-4589-a988-e7e32bfea075-kube-api-access-5tmpp\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.308636 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkvcd\" (UniqueName: \"kubernetes.io/projected/624e9a13-c9c5-4ef3-8628-056bfc65338b-kube-api-access-nkvcd\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.308649 4787 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2e88d48e-6302-4a84-90a7-69446be90e4a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.308658 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44cc5af2-0636-4589-a988-e7e32bfea075-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.308669 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/624e9a13-c9c5-4ef3-8628-056bfc65338b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.310390 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7af45749-cd7f-490a-b2f6-bf979ea6467e-utilities" (OuterVolumeSpecName: "utilities") pod "7af45749-cd7f-490a-b2f6-bf979ea6467e" (UID: "7af45749-cd7f-490a-b2f6-bf979ea6467e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.314385 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af45749-cd7f-490a-b2f6-bf979ea6467e-kube-api-access-wz5bs" (OuterVolumeSpecName: "kube-api-access-wz5bs") pod "7af45749-cd7f-490a-b2f6-bf979ea6467e" (UID: "7af45749-cd7f-490a-b2f6-bf979ea6467e"). InnerVolumeSpecName "kube-api-access-wz5bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.318917 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.397938 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.409283 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76lt5\" (UniqueName: \"kubernetes.io/projected/1e42281e-20ed-4105-866f-878ffbf6c6eb-kube-api-access-76lt5\") pod \"1e42281e-20ed-4105-866f-878ffbf6c6eb\" (UID: \"1e42281e-20ed-4105-866f-878ffbf6c6eb\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.409341 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e42281e-20ed-4105-866f-878ffbf6c6eb-utilities\") pod \"1e42281e-20ed-4105-866f-878ffbf6c6eb\" (UID: \"1e42281e-20ed-4105-866f-878ffbf6c6eb\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.409527 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e42281e-20ed-4105-866f-878ffbf6c6eb-catalog-content\") pod \"1e42281e-20ed-4105-866f-878ffbf6c6eb\" (UID: \"1e42281e-20ed-4105-866f-878ffbf6c6eb\") " Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.409773 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz5bs\" (UniqueName: \"kubernetes.io/projected/7af45749-cd7f-490a-b2f6-bf979ea6467e-kube-api-access-wz5bs\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.409788 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af45749-cd7f-490a-b2f6-bf979ea6467e-utilities\") on node \"crc\" 
DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.411924 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e42281e-20ed-4105-866f-878ffbf6c6eb-utilities" (OuterVolumeSpecName: "utilities") pod "1e42281e-20ed-4105-866f-878ffbf6c6eb" (UID: "1e42281e-20ed-4105-866f-878ffbf6c6eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.412831 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e42281e-20ed-4105-866f-878ffbf6c6eb-kube-api-access-76lt5" (OuterVolumeSpecName: "kube-api-access-76lt5") pod "1e42281e-20ed-4105-866f-878ffbf6c6eb" (UID: "1e42281e-20ed-4105-866f-878ffbf6c6eb"). InnerVolumeSpecName "kube-api-access-76lt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.420974 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7af45749-cd7f-490a-b2f6-bf979ea6467e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7af45749-cd7f-490a-b2f6-bf979ea6467e" (UID: "7af45749-cd7f-490a-b2f6-bf979ea6467e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.427106 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.459130 4787 generic.go:334] "Generic (PLEG): container finished" podID="2e88d48e-6302-4a84-90a7-69446be90e4a" containerID="380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef" exitCode=0 Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.459210 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" event={"ID":"2e88d48e-6302-4a84-90a7-69446be90e4a","Type":"ContainerDied","Data":"380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef"} Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.459249 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" event={"ID":"2e88d48e-6302-4a84-90a7-69446be90e4a","Type":"ContainerDied","Data":"080593f490256d1b9111cdbbbfb3bf578e73f07bd416e1297c741853a022c9d0"} Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.459271 4787 scope.go:117] "RemoveContainer" containerID="380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.459410 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h5d4g" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.467762 4787 generic.go:334] "Generic (PLEG): container finished" podID="624e9a13-c9c5-4ef3-8628-056bfc65338b" containerID="c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864" exitCode=0 Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.467870 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ht6h9" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.467882 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht6h9" event={"ID":"624e9a13-c9c5-4ef3-8628-056bfc65338b","Type":"ContainerDied","Data":"c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864"} Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.467953 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht6h9" event={"ID":"624e9a13-c9c5-4ef3-8628-056bfc65338b","Type":"ContainerDied","Data":"c2980947d8538d4e3d4cb9e62697c884d7b7e8b6d4185152424643323c016212"} Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.476352 4787 generic.go:334] "Generic (PLEG): container finished" podID="44cc5af2-0636-4589-a988-e7e32bfea075" containerID="e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223" exitCode=0 Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.476477 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxzl" event={"ID":"44cc5af2-0636-4589-a988-e7e32bfea075","Type":"ContainerDied","Data":"e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223"} Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.476536 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxzl" event={"ID":"44cc5af2-0636-4589-a988-e7e32bfea075","Type":"ContainerDied","Data":"81b4d3c6ee8e93f24d5baa8dea52cfb0d85fded6205579eedde915fa9ac03cb2"} Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.476590 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tnxzl" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.480420 4787 generic.go:334] "Generic (PLEG): container finished" podID="7af45749-cd7f-490a-b2f6-bf979ea6467e" containerID="3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79" exitCode=0 Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.480627 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-npc57" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.480787 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npc57" event={"ID":"7af45749-cd7f-490a-b2f6-bf979ea6467e","Type":"ContainerDied","Data":"3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79"} Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.480835 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npc57" event={"ID":"7af45749-cd7f-490a-b2f6-bf979ea6467e","Type":"ContainerDied","Data":"d4eb89ad2e8044f8b7ec7f7b1e0a8d0bd6b479506668e725c471bfff9235d2ce"} Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.480965 4787 scope.go:117] "RemoveContainer" containerID="380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef" Jan 27 07:56:20 crc kubenswrapper[4787]: E0127 07:56:20.481539 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef\": container with ID starting with 380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef not found: ID does not exist" containerID="380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.481599 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef"} err="failed to get container status \"380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef\": rpc error: code = NotFound desc = could not find container \"380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef\": container with ID starting with 380d74a932668a4ea80abdd7fc668dad563b863ffa1063b090e363f00e977eef not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.481627 4787 scope.go:117] "RemoveContainer" containerID="c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.485924 4787 generic.go:334] "Generic (PLEG): container finished" podID="1e42281e-20ed-4105-866f-878ffbf6c6eb" containerID="8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84" exitCode=0 Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.485991 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kkjnl" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.486002 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkjnl" event={"ID":"1e42281e-20ed-4105-866f-878ffbf6c6eb","Type":"ContainerDied","Data":"8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84"} Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.486438 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkjnl" event={"ID":"1e42281e-20ed-4105-866f-878ffbf6c6eb","Type":"ContainerDied","Data":"69a14500beb72f949103663a0b1a2f611ef846aca8d992b4feec769e4103a913"} Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.508544 4787 scope.go:117] "RemoveContainer" containerID="ec5e3c96098a08d05b7b94cefe4b2f9c131e71b862786c006019c2dec6da5498" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.511586 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af45749-cd7f-490a-b2f6-bf979ea6467e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.511617 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e42281e-20ed-4105-866f-878ffbf6c6eb-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.511629 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76lt5\" (UniqueName: \"kubernetes.io/projected/1e42281e-20ed-4105-866f-878ffbf6c6eb-kube-api-access-76lt5\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.511775 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h5d4g"] Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.519802 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h5d4g"] Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.527731 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ht6h9"] Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.534352 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ht6h9"] Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.539611 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tnxzl"] Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.545826 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tnxzl"] Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.546982 4787 scope.go:117] "RemoveContainer" containerID="16105c08221c0a2bad9341270bf79d1c85ac086baa9a0934ed9322b6ec8a995e" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.550460 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-npc57"] Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.553993 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-npc57"] Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.558356 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e42281e-20ed-4105-866f-878ffbf6c6eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"1e42281e-20ed-4105-866f-878ffbf6c6eb" (UID: "1e42281e-20ed-4105-866f-878ffbf6c6eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.562353 4787 scope.go:117] "RemoveContainer" containerID="c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864" Jan 27 07:56:20 crc kubenswrapper[4787]: E0127 07:56:20.563310 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864\": container with ID starting with c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864 not found: ID does not exist" containerID="c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.563367 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864"} err="failed to get container status \"c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864\": rpc error: code = NotFound desc = could not find container \"c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864\": container with ID starting with c1c232a68764a7e2a2ab4410f7d9d45a3f03c3775264008e4c5ac89fab212864 not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.563402 4787 scope.go:117] "RemoveContainer" containerID="ec5e3c96098a08d05b7b94cefe4b2f9c131e71b862786c006019c2dec6da5498" Jan 27 07:56:20 crc kubenswrapper[4787]: E0127 07:56:20.563913 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec5e3c96098a08d05b7b94cefe4b2f9c131e71b862786c006019c2dec6da5498\": container with ID starting with ec5e3c96098a08d05b7b94cefe4b2f9c131e71b862786c006019c2dec6da5498 not found: ID does not exist" containerID="ec5e3c96098a08d05b7b94cefe4b2f9c131e71b862786c006019c2dec6da5498" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.563956 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5e3c96098a08d05b7b94cefe4b2f9c131e71b862786c006019c2dec6da5498"} err="failed to get container status \"ec5e3c96098a08d05b7b94cefe4b2f9c131e71b862786c006019c2dec6da5498\": rpc error: code = NotFound desc = could not find container \"ec5e3c96098a08d05b7b94cefe4b2f9c131e71b862786c006019c2dec6da5498\": container with ID starting with ec5e3c96098a08d05b7b94cefe4b2f9c131e71b862786c006019c2dec6da5498 not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.563984 4787 scope.go:117] "RemoveContainer" containerID="16105c08221c0a2bad9341270bf79d1c85ac086baa9a0934ed9322b6ec8a995e" Jan 27 07:56:20 crc kubenswrapper[4787]: E0127 07:56:20.564333 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16105c08221c0a2bad9341270bf79d1c85ac086baa9a0934ed9322b6ec8a995e\": container with ID starting with 16105c08221c0a2bad9341270bf79d1c85ac086baa9a0934ed9322b6ec8a995e not found: ID does not exist" containerID="16105c08221c0a2bad9341270bf79d1c85ac086baa9a0934ed9322b6ec8a995e" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.564358 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16105c08221c0a2bad9341270bf79d1c85ac086baa9a0934ed9322b6ec8a995e"} err="failed to get 
container status \"16105c08221c0a2bad9341270bf79d1c85ac086baa9a0934ed9322b6ec8a995e\": rpc error: code = NotFound desc = could not find container \"16105c08221c0a2bad9341270bf79d1c85ac086baa9a0934ed9322b6ec8a995e\": container with ID starting with 16105c08221c0a2bad9341270bf79d1c85ac086baa9a0934ed9322b6ec8a995e not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.564371 4787 scope.go:117] "RemoveContainer" containerID="e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.577746 4787 scope.go:117] "RemoveContainer" containerID="0cc6e3133b3811c2501b6fd4d34b20a999454792da7dd131df5d09583da6565d" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.592790 4787 scope.go:117] "RemoveContainer" containerID="5ca52b27fd7e43c704f8e05cf0d0277c2c4f718ac46a1e5031c9baa689471895" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.606347 4787 scope.go:117] "RemoveContainer" containerID="e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223" Jan 27 07:56:20 crc kubenswrapper[4787]: E0127 07:56:20.606843 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223\": container with ID starting with e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223 not found: ID does not exist" containerID="e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.606873 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223"} err="failed to get container status \"e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223\": rpc error: code = NotFound desc = could not find container \"e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223\": container with ID starting with e3cf97ca374aafda820a97034b2b6a3f6e20f34e82f747d6f2cfb26dfc59a223 not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.606892 4787 scope.go:117] "RemoveContainer" containerID="0cc6e3133b3811c2501b6fd4d34b20a999454792da7dd131df5d09583da6565d" Jan 27 07:56:20 crc kubenswrapper[4787]: E0127 07:56:20.607338 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cc6e3133b3811c2501b6fd4d34b20a999454792da7dd131df5d09583da6565d\": container with ID starting with 0cc6e3133b3811c2501b6fd4d34b20a999454792da7dd131df5d09583da6565d not found: ID does not exist" containerID="0cc6e3133b3811c2501b6fd4d34b20a999454792da7dd131df5d09583da6565d" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.607418 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc6e3133b3811c2501b6fd4d34b20a999454792da7dd131df5d09583da6565d"} err="failed to get container status \"0cc6e3133b3811c2501b6fd4d34b20a999454792da7dd131df5d09583da6565d\": rpc error: code = NotFound desc = could not find container \"0cc6e3133b3811c2501b6fd4d34b20a999454792da7dd131df5d09583da6565d\": container with ID starting with 0cc6e3133b3811c2501b6fd4d34b20a999454792da7dd131df5d09583da6565d not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.607537 4787 scope.go:117] "RemoveContainer" containerID="5ca52b27fd7e43c704f8e05cf0d0277c2c4f718ac46a1e5031c9baa689471895" Jan 27 07:56:20 crc 
kubenswrapper[4787]: E0127 07:56:20.608359 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca52b27fd7e43c704f8e05cf0d0277c2c4f718ac46a1e5031c9baa689471895\": container with ID starting with 5ca52b27fd7e43c704f8e05cf0d0277c2c4f718ac46a1e5031c9baa689471895 not found: ID does not exist" containerID="5ca52b27fd7e43c704f8e05cf0d0277c2c4f718ac46a1e5031c9baa689471895" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.608449 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca52b27fd7e43c704f8e05cf0d0277c2c4f718ac46a1e5031c9baa689471895"} err="failed to get container status \"5ca52b27fd7e43c704f8e05cf0d0277c2c4f718ac46a1e5031c9baa689471895\": rpc error: code = NotFound desc = could not find container \"5ca52b27fd7e43c704f8e05cf0d0277c2c4f718ac46a1e5031c9baa689471895\": container with ID starting with 5ca52b27fd7e43c704f8e05cf0d0277c2c4f718ac46a1e5031c9baa689471895 not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.608480 4787 scope.go:117] "RemoveContainer" containerID="3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.613078 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e42281e-20ed-4105-866f-878ffbf6c6eb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.628707 4787 scope.go:117] "RemoveContainer" containerID="629cf5f25077b459468d115da73f8db540be3990ca6244c0ac21ff90a31aed2d" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.657382 4787 scope.go:117] "RemoveContainer" containerID="ae2a0db24a496532988d8c9c232f3f25c682b2d988bff4fb3dec2a81481a0733" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.680696 4787 scope.go:117] "RemoveContainer" containerID="3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79" Jan 27 07:56:20 crc kubenswrapper[4787]: E0127 07:56:20.685076 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79\": container with ID starting with 3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79 not found: ID does not exist" containerID="3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.685153 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79"} err="failed to get container status \"3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79\": rpc error: code = NotFound desc = could not find container \"3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79\": container with ID starting with 3f0dd71506f96b5f1d29b34337fb17e1db8a01c5d142ebe84263d8b6c18e1d79 not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.685201 4787 scope.go:117] "RemoveContainer" containerID="629cf5f25077b459468d115da73f8db540be3990ca6244c0ac21ff90a31aed2d" Jan 27 07:56:20 crc kubenswrapper[4787]: E0127 07:56:20.685748 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"629cf5f25077b459468d115da73f8db540be3990ca6244c0ac21ff90a31aed2d\": container with ID starting with 
629cf5f25077b459468d115da73f8db540be3990ca6244c0ac21ff90a31aed2d not found: ID does not exist" containerID="629cf5f25077b459468d115da73f8db540be3990ca6244c0ac21ff90a31aed2d" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.685799 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"629cf5f25077b459468d115da73f8db540be3990ca6244c0ac21ff90a31aed2d"} err="failed to get container status \"629cf5f25077b459468d115da73f8db540be3990ca6244c0ac21ff90a31aed2d\": rpc error: code = NotFound desc = could not find container \"629cf5f25077b459468d115da73f8db540be3990ca6244c0ac21ff90a31aed2d\": container with ID starting with 629cf5f25077b459468d115da73f8db540be3990ca6244c0ac21ff90a31aed2d not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.685825 4787 scope.go:117] "RemoveContainer" containerID="ae2a0db24a496532988d8c9c232f3f25c682b2d988bff4fb3dec2a81481a0733" Jan 27 07:56:20 crc kubenswrapper[4787]: E0127 07:56:20.686291 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae2a0db24a496532988d8c9c232f3f25c682b2d988bff4fb3dec2a81481a0733\": container with ID starting with ae2a0db24a496532988d8c9c232f3f25c682b2d988bff4fb3dec2a81481a0733 not found: ID does not exist" containerID="ae2a0db24a496532988d8c9c232f3f25c682b2d988bff4fb3dec2a81481a0733" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.686348 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2a0db24a496532988d8c9c232f3f25c682b2d988bff4fb3dec2a81481a0733"} err="failed to get container status \"ae2a0db24a496532988d8c9c232f3f25c682b2d988bff4fb3dec2a81481a0733\": rpc error: code = NotFound desc = could not find container \"ae2a0db24a496532988d8c9c232f3f25c682b2d988bff4fb3dec2a81481a0733\": container with ID starting with ae2a0db24a496532988d8c9c232f3f25c682b2d988bff4fb3dec2a81481a0733 not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.686392 4787 scope.go:117] "RemoveContainer" containerID="8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.688173 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fkd29"] Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.693709 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 07:56:20 crc kubenswrapper[4787]: W0127 07:56:20.696947 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39675fef_cac4_48c9_bd77_e0ee695a5ab8.slice/crio-eca5e6848d5a7c0627f7e7c3d5db9faa8e9c4bb27962c92bfa01ff60756ecf24 WatchSource:0}: Error finding container eca5e6848d5a7c0627f7e7c3d5db9faa8e9c4bb27962c92bfa01ff60756ecf24: Status 404 returned error can't find the container with id eca5e6848d5a7c0627f7e7c3d5db9faa8e9c4bb27962c92bfa01ff60756ecf24 Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.716534 4787 scope.go:117] "RemoveContainer" containerID="aba819bb87de56fd8e57aa84a316cf4c3386292e3c4ce220f195f806f8261dfd" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.745008 4787 scope.go:117] "RemoveContainer" containerID="71a0fd62f051e1733036586c55bfd4c513668f6b2af829fed34f76f2fd16d4a2" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.828436 4787 scope.go:117] "RemoveContainer" 
containerID="8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84" Jan 27 07:56:20 crc kubenswrapper[4787]: E0127 07:56:20.829139 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84\": container with ID starting with 8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84 not found: ID does not exist" containerID="8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.829212 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84"} err="failed to get container status \"8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84\": rpc error: code = NotFound desc = could not find container \"8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84\": container with ID starting with 8cb3ae0fcd39b44a275d8cd5c91a52e737b682bd55f4da55edd1a06941315d84 not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.829256 4787 scope.go:117] "RemoveContainer" containerID="aba819bb87de56fd8e57aa84a316cf4c3386292e3c4ce220f195f806f8261dfd" Jan 27 07:56:20 crc kubenswrapper[4787]: E0127 07:56:20.829908 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba819bb87de56fd8e57aa84a316cf4c3386292e3c4ce220f195f806f8261dfd\": container with ID starting with aba819bb87de56fd8e57aa84a316cf4c3386292e3c4ce220f195f806f8261dfd not found: ID does not exist" containerID="aba819bb87de56fd8e57aa84a316cf4c3386292e3c4ce220f195f806f8261dfd" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.830002 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba819bb87de56fd8e57aa84a316cf4c3386292e3c4ce220f195f806f8261dfd"} err="failed to get container status \"aba819bb87de56fd8e57aa84a316cf4c3386292e3c4ce220f195f806f8261dfd\": rpc error: code = NotFound desc = could not find container \"aba819bb87de56fd8e57aa84a316cf4c3386292e3c4ce220f195f806f8261dfd\": container with ID starting with aba819bb87de56fd8e57aa84a316cf4c3386292e3c4ce220f195f806f8261dfd not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.830046 4787 scope.go:117] "RemoveContainer" containerID="71a0fd62f051e1733036586c55bfd4c513668f6b2af829fed34f76f2fd16d4a2" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.830406 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 07:56:20 crc kubenswrapper[4787]: E0127 07:56:20.830693 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71a0fd62f051e1733036586c55bfd4c513668f6b2af829fed34f76f2fd16d4a2\": container with ID starting with 71a0fd62f051e1733036586c55bfd4c513668f6b2af829fed34f76f2fd16d4a2 not found: ID does not exist" containerID="71a0fd62f051e1733036586c55bfd4c513668f6b2af829fed34f76f2fd16d4a2" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.830727 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71a0fd62f051e1733036586c55bfd4c513668f6b2af829fed34f76f2fd16d4a2"} err="failed to get container status \"71a0fd62f051e1733036586c55bfd4c513668f6b2af829fed34f76f2fd16d4a2\": rpc error: 
code = NotFound desc = could not find container \"71a0fd62f051e1733036586c55bfd4c513668f6b2af829fed34f76f2fd16d4a2\": container with ID starting with 71a0fd62f051e1733036586c55bfd4c513668f6b2af829fed34f76f2fd16d4a2 not found: ID does not exist" Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.900111 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kkjnl"] Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.903338 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kkjnl"] Jan 27 07:56:20 crc kubenswrapper[4787]: I0127 07:56:20.962716 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.089530 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e42281e-20ed-4105-866f-878ffbf6c6eb" path="/var/lib/kubelet/pods/1e42281e-20ed-4105-866f-878ffbf6c6eb/volumes" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.091386 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e88d48e-6302-4a84-90a7-69446be90e4a" path="/var/lib/kubelet/pods/2e88d48e-6302-4a84-90a7-69446be90e4a/volumes" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.092270 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44cc5af2-0636-4589-a988-e7e32bfea075" path="/var/lib/kubelet/pods/44cc5af2-0636-4589-a988-e7e32bfea075/volumes" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.093914 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="624e9a13-c9c5-4ef3-8628-056bfc65338b" path="/var/lib/kubelet/pods/624e9a13-c9c5-4ef3-8628-056bfc65338b/volumes" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.094836 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7af45749-cd7f-490a-b2f6-bf979ea6467e" path="/var/lib/kubelet/pods/7af45749-cd7f-490a-b2f6-bf979ea6467e/volumes" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.186288 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.270793 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.290935 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.343568 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.498864 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" event={"ID":"39675fef-cac4-48c9-bd77-e0ee695a5ab8","Type":"ContainerStarted","Data":"57f56124a0e8fcf97a1c8bde7c6849d7575aae5bfa455464c971f808ebdc6c3b"} Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.498941 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" event={"ID":"39675fef-cac4-48c9-bd77-e0ee695a5ab8","Type":"ContainerStarted","Data":"eca5e6848d5a7c0627f7e7c3d5db9faa8e9c4bb27962c92bfa01ff60756ecf24"} Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.499194 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.503328 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.517776 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fkd29" podStartSLOduration=2.517751002 podStartE2EDuration="2.517751002s" podCreationTimestamp="2026-01-27 07:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:56:21.51690414 +0000 UTC m=+287.169259672" watchObservedRunningTime="2026-01-27 07:56:21.517751002 +0000 UTC m=+287.170106504" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.600108 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.630827 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 07:56:21 crc kubenswrapper[4787]: I0127 07:56:21.933434 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 07:56:22 crc kubenswrapper[4787]: I0127 07:56:22.000359 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 07:56:22 crc kubenswrapper[4787]: I0127 07:56:22.224294 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 07:56:33 crc kubenswrapper[4787]: I0127 07:56:33.863959 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 07:56:34 crc kubenswrapper[4787]: I0127 07:56:34.858681 4787 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 07:56:38 crc kubenswrapper[4787]: I0127 07:56:38.075438 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 07:56:46 crc kubenswrapper[4787]: I0127 07:56:46.626608 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 07:56:54 crc kubenswrapper[4787]: I0127 07:56:54.253544 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 07:56:56 crc kubenswrapper[4787]: I0127 07:56:56.540209 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 07:56:58 crc kubenswrapper[4787]: I0127 07:56:58.230284 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 07:57:03 crc kubenswrapper[4787]: I0127 07:57:03.196018 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.873923 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjx52"] Jan 27 07:57:32 crc kubenswrapper[4787]: E0127 07:57:32.874915 4787 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44cc5af2-0636-4589-a988-e7e32bfea075" containerName="registry-server" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.874939 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="44cc5af2-0636-4589-a988-e7e32bfea075" containerName="registry-server" Jan 27 07:57:32 crc kubenswrapper[4787]: E0127 07:57:32.874963 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624e9a13-c9c5-4ef3-8628-056bfc65338b" containerName="extract-utilities" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.874977 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="624e9a13-c9c5-4ef3-8628-056bfc65338b" containerName="extract-utilities" Jan 27 07:57:32 crc kubenswrapper[4787]: E0127 07:57:32.874994 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624e9a13-c9c5-4ef3-8628-056bfc65338b" containerName="registry-server" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875006 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="624e9a13-c9c5-4ef3-8628-056bfc65338b" containerName="registry-server" Jan 27 07:57:32 crc kubenswrapper[4787]: E0127 07:57:32.875020 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e88d48e-6302-4a84-90a7-69446be90e4a" containerName="marketplace-operator" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875028 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e88d48e-6302-4a84-90a7-69446be90e4a" containerName="marketplace-operator" Jan 27 07:57:32 crc kubenswrapper[4787]: E0127 07:57:32.875038 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44cc5af2-0636-4589-a988-e7e32bfea075" containerName="extract-content" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875046 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="44cc5af2-0636-4589-a988-e7e32bfea075" containerName="extract-content" Jan 27 07:57:32 crc kubenswrapper[4787]: E0127 07:57:32.875055 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af45749-cd7f-490a-b2f6-bf979ea6467e" containerName="registry-server" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875063 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af45749-cd7f-490a-b2f6-bf979ea6467e" containerName="registry-server" Jan 27 07:57:32 crc kubenswrapper[4787]: E0127 07:57:32.875075 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e42281e-20ed-4105-866f-878ffbf6c6eb" containerName="registry-server" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875084 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e42281e-20ed-4105-866f-878ffbf6c6eb" containerName="registry-server" Jan 27 07:57:32 crc kubenswrapper[4787]: E0127 07:57:32.875095 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af45749-cd7f-490a-b2f6-bf979ea6467e" containerName="extract-content" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875103 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af45749-cd7f-490a-b2f6-bf979ea6467e" containerName="extract-content" Jan 27 07:57:32 crc kubenswrapper[4787]: E0127 07:57:32.875116 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af45749-cd7f-490a-b2f6-bf979ea6467e" containerName="extract-utilities" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875125 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af45749-cd7f-490a-b2f6-bf979ea6467e" containerName="extract-utilities" Jan 27 07:57:32 crc 
kubenswrapper[4787]: E0127 07:57:32.875137 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624e9a13-c9c5-4ef3-8628-056bfc65338b" containerName="extract-content" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875145 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="624e9a13-c9c5-4ef3-8628-056bfc65338b" containerName="extract-content" Jan 27 07:57:32 crc kubenswrapper[4787]: E0127 07:57:32.875157 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44cc5af2-0636-4589-a988-e7e32bfea075" containerName="extract-utilities" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875165 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="44cc5af2-0636-4589-a988-e7e32bfea075" containerName="extract-utilities" Jan 27 07:57:32 crc kubenswrapper[4787]: E0127 07:57:32.875179 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e42281e-20ed-4105-866f-878ffbf6c6eb" containerName="extract-utilities" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875188 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e42281e-20ed-4105-866f-878ffbf6c6eb" containerName="extract-utilities" Jan 27 07:57:32 crc kubenswrapper[4787]: E0127 07:57:32.875205 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e42281e-20ed-4105-866f-878ffbf6c6eb" containerName="extract-content" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875213 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e42281e-20ed-4105-866f-878ffbf6c6eb" containerName="extract-content" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875338 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="624e9a13-c9c5-4ef3-8628-056bfc65338b" containerName="registry-server" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875354 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="44cc5af2-0636-4589-a988-e7e32bfea075" containerName="registry-server" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875367 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e88d48e-6302-4a84-90a7-69446be90e4a" containerName="marketplace-operator" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875381 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af45749-cd7f-490a-b2f6-bf979ea6467e" containerName="registry-server" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.875391 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e42281e-20ed-4105-866f-878ffbf6c6eb" containerName="registry-server" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.876416 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.879945 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.887629 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjx52"] Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.983296 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htqgr\" (UniqueName: \"kubernetes.io/projected/6953906d-8a35-4d3e-83a3-3a7451e834cc-kube-api-access-htqgr\") pod \"community-operators-gjx52\" (UID: \"6953906d-8a35-4d3e-83a3-3a7451e834cc\") " pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.983374 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6953906d-8a35-4d3e-83a3-3a7451e834cc-catalog-content\") pod \"community-operators-gjx52\" (UID: \"6953906d-8a35-4d3e-83a3-3a7451e834cc\") " pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:32 crc kubenswrapper[4787]: I0127 07:57:32.983515 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6953906d-8a35-4d3e-83a3-3a7451e834cc-utilities\") pod \"community-operators-gjx52\" (UID: \"6953906d-8a35-4d3e-83a3-3a7451e834cc\") " pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.070711 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rdpsb"] Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.071871 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.075176 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.084914 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6953906d-8a35-4d3e-83a3-3a7451e834cc-utilities\") pod \"community-operators-gjx52\" (UID: \"6953906d-8a35-4d3e-83a3-3a7451e834cc\") " pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.085412 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6953906d-8a35-4d3e-83a3-3a7451e834cc-utilities\") pod \"community-operators-gjx52\" (UID: \"6953906d-8a35-4d3e-83a3-3a7451e834cc\") " pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.085535 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htqgr\" (UniqueName: \"kubernetes.io/projected/6953906d-8a35-4d3e-83a3-3a7451e834cc-kube-api-access-htqgr\") pod \"community-operators-gjx52\" (UID: \"6953906d-8a35-4d3e-83a3-3a7451e834cc\") " pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.085593 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6953906d-8a35-4d3e-83a3-3a7451e834cc-catalog-content\") pod \"community-operators-gjx52\" (UID: \"6953906d-8a35-4d3e-83a3-3a7451e834cc\") " pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.087849 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6953906d-8a35-4d3e-83a3-3a7451e834cc-catalog-content\") pod \"community-operators-gjx52\" (UID: \"6953906d-8a35-4d3e-83a3-3a7451e834cc\") " pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.106344 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdpsb"] Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.123792 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htqgr\" (UniqueName: \"kubernetes.io/projected/6953906d-8a35-4d3e-83a3-3a7451e834cc-kube-api-access-htqgr\") pod \"community-operators-gjx52\" (UID: \"6953906d-8a35-4d3e-83a3-3a7451e834cc\") " pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.189384 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6daca56-9e67-41d1-80da-c213717daead-utilities\") pod \"redhat-marketplace-rdpsb\" (UID: \"b6daca56-9e67-41d1-80da-c213717daead\") " pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.189751 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6daca56-9e67-41d1-80da-c213717daead-catalog-content\") pod \"redhat-marketplace-rdpsb\" (UID: 
\"b6daca56-9e67-41d1-80da-c213717daead\") " pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.189874 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rj2l\" (UniqueName: \"kubernetes.io/projected/b6daca56-9e67-41d1-80da-c213717daead-kube-api-access-6rj2l\") pod \"redhat-marketplace-rdpsb\" (UID: \"b6daca56-9e67-41d1-80da-c213717daead\") " pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.199571 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.291828 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6daca56-9e67-41d1-80da-c213717daead-utilities\") pod \"redhat-marketplace-rdpsb\" (UID: \"b6daca56-9e67-41d1-80da-c213717daead\") " pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.291922 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6daca56-9e67-41d1-80da-c213717daead-catalog-content\") pod \"redhat-marketplace-rdpsb\" (UID: \"b6daca56-9e67-41d1-80da-c213717daead\") " pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.291960 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rj2l\" (UniqueName: \"kubernetes.io/projected/b6daca56-9e67-41d1-80da-c213717daead-kube-api-access-6rj2l\") pod \"redhat-marketplace-rdpsb\" (UID: \"b6daca56-9e67-41d1-80da-c213717daead\") " pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.292485 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6daca56-9e67-41d1-80da-c213717daead-catalog-content\") pod \"redhat-marketplace-rdpsb\" (UID: \"b6daca56-9e67-41d1-80da-c213717daead\") " pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.292754 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6daca56-9e67-41d1-80da-c213717daead-utilities\") pod \"redhat-marketplace-rdpsb\" (UID: \"b6daca56-9e67-41d1-80da-c213717daead\") " pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.320828 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rj2l\" (UniqueName: \"kubernetes.io/projected/b6daca56-9e67-41d1-80da-c213717daead-kube-api-access-6rj2l\") pod \"redhat-marketplace-rdpsb\" (UID: \"b6daca56-9e67-41d1-80da-c213717daead\") " pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.392214 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.639329 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjx52"] Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.800812 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdpsb"] Jan 27 07:57:33 crc kubenswrapper[4787]: W0127 07:57:33.802934 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6daca56_9e67_41d1_80da_c213717daead.slice/crio-17ce87677e9bd3fe72210433712c6e413796e79c3f921bc7cf6cb6fca1581d2f WatchSource:0}: Error finding container 17ce87677e9bd3fe72210433712c6e413796e79c3f921bc7cf6cb6fca1581d2f: Status 404 returned error can't find the container with id 17ce87677e9bd3fe72210433712c6e413796e79c3f921bc7cf6cb6fca1581d2f Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.950156 4787 generic.go:334] "Generic (PLEG): container finished" podID="6953906d-8a35-4d3e-83a3-3a7451e834cc" containerID="3590d6b4bb06bc944ba0e13d5162ca1a3baa9bd382eb5d6bae622b4765fc0e6e" exitCode=0 Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.950631 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjx52" event={"ID":"6953906d-8a35-4d3e-83a3-3a7451e834cc","Type":"ContainerDied","Data":"3590d6b4bb06bc944ba0e13d5162ca1a3baa9bd382eb5d6bae622b4765fc0e6e"} Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.950787 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjx52" event={"ID":"6953906d-8a35-4d3e-83a3-3a7451e834cc","Type":"ContainerStarted","Data":"2d86cdc40f197a96e1ca231fa9c6b0844138a5f041e3afec28314ca6d04f9d50"} Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.959368 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdpsb" event={"ID":"b6daca56-9e67-41d1-80da-c213717daead","Type":"ContainerStarted","Data":"77ec79c6badfdeac1d0feceb0847c6998ccb5e4ff37bceb496a918544cd170b9"} Jan 27 07:57:33 crc kubenswrapper[4787]: I0127 07:57:33.959425 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdpsb" event={"ID":"b6daca56-9e67-41d1-80da-c213717daead","Type":"ContainerStarted","Data":"17ce87677e9bd3fe72210433712c6e413796e79c3f921bc7cf6cb6fca1581d2f"} Jan 27 07:57:34 crc kubenswrapper[4787]: I0127 07:57:34.966415 4787 generic.go:334] "Generic (PLEG): container finished" podID="b6daca56-9e67-41d1-80da-c213717daead" containerID="77ec79c6badfdeac1d0feceb0847c6998ccb5e4ff37bceb496a918544cd170b9" exitCode=0 Jan 27 07:57:34 crc kubenswrapper[4787]: I0127 07:57:34.966466 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdpsb" event={"ID":"b6daca56-9e67-41d1-80da-c213717daead","Type":"ContainerDied","Data":"77ec79c6badfdeac1d0feceb0847c6998ccb5e4ff37bceb496a918544cd170b9"} Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.268271 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8bz6z"] Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.276022 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.276261 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8bz6z"] Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.278857 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.422302 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c2c8464-ea72-46ca-a23d-ae23e6617ce8-catalog-content\") pod \"redhat-operators-8bz6z\" (UID: \"3c2c8464-ea72-46ca-a23d-ae23e6617ce8\") " pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.422401 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c2c8464-ea72-46ca-a23d-ae23e6617ce8-utilities\") pod \"redhat-operators-8bz6z\" (UID: \"3c2c8464-ea72-46ca-a23d-ae23e6617ce8\") " pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.422437 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrfqx\" (UniqueName: \"kubernetes.io/projected/3c2c8464-ea72-46ca-a23d-ae23e6617ce8-kube-api-access-vrfqx\") pod \"redhat-operators-8bz6z\" (UID: \"3c2c8464-ea72-46ca-a23d-ae23e6617ce8\") " pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.475739 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bjssk"] Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.479188 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.484323 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bjssk"] Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.485237 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.524237 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c2c8464-ea72-46ca-a23d-ae23e6617ce8-catalog-content\") pod \"redhat-operators-8bz6z\" (UID: \"3c2c8464-ea72-46ca-a23d-ae23e6617ce8\") " pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.524409 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c2c8464-ea72-46ca-a23d-ae23e6617ce8-utilities\") pod \"redhat-operators-8bz6z\" (UID: \"3c2c8464-ea72-46ca-a23d-ae23e6617ce8\") " pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.524459 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrfqx\" (UniqueName: \"kubernetes.io/projected/3c2c8464-ea72-46ca-a23d-ae23e6617ce8-kube-api-access-vrfqx\") pod \"redhat-operators-8bz6z\" (UID: \"3c2c8464-ea72-46ca-a23d-ae23e6617ce8\") " pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.525569 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c2c8464-ea72-46ca-a23d-ae23e6617ce8-catalog-content\") pod \"redhat-operators-8bz6z\" (UID: \"3c2c8464-ea72-46ca-a23d-ae23e6617ce8\") " pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.526218 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c2c8464-ea72-46ca-a23d-ae23e6617ce8-utilities\") pod \"redhat-operators-8bz6z\" (UID: \"3c2c8464-ea72-46ca-a23d-ae23e6617ce8\") " pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.549618 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrfqx\" (UniqueName: \"kubernetes.io/projected/3c2c8464-ea72-46ca-a23d-ae23e6617ce8-kube-api-access-vrfqx\") pod \"redhat-operators-8bz6z\" (UID: \"3c2c8464-ea72-46ca-a23d-ae23e6617ce8\") " pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.616361 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.628589 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56eb4b72-a8dc-4300-882e-e29d83442af5-catalog-content\") pod \"certified-operators-bjssk\" (UID: \"56eb4b72-a8dc-4300-882e-e29d83442af5\") " pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.628665 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xn5x\" (UniqueName: \"kubernetes.io/projected/56eb4b72-a8dc-4300-882e-e29d83442af5-kube-api-access-9xn5x\") pod \"certified-operators-bjssk\" (UID: \"56eb4b72-a8dc-4300-882e-e29d83442af5\") " pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.628700 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56eb4b72-a8dc-4300-882e-e29d83442af5-utilities\") pod \"certified-operators-bjssk\" (UID: \"56eb4b72-a8dc-4300-882e-e29d83442af5\") " pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.730309 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xn5x\" (UniqueName: \"kubernetes.io/projected/56eb4b72-a8dc-4300-882e-e29d83442af5-kube-api-access-9xn5x\") pod \"certified-operators-bjssk\" (UID: \"56eb4b72-a8dc-4300-882e-e29d83442af5\") " pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.730389 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56eb4b72-a8dc-4300-882e-e29d83442af5-utilities\") pod \"certified-operators-bjssk\" (UID: \"56eb4b72-a8dc-4300-882e-e29d83442af5\") " pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.730485 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56eb4b72-a8dc-4300-882e-e29d83442af5-catalog-content\") pod \"certified-operators-bjssk\" (UID: \"56eb4b72-a8dc-4300-882e-e29d83442af5\") " pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.731039 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56eb4b72-a8dc-4300-882e-e29d83442af5-catalog-content\") pod \"certified-operators-bjssk\" (UID: \"56eb4b72-a8dc-4300-882e-e29d83442af5\") " pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.732182 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56eb4b72-a8dc-4300-882e-e29d83442af5-utilities\") pod \"certified-operators-bjssk\" (UID: \"56eb4b72-a8dc-4300-882e-e29d83442af5\") " pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.760935 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xn5x\" (UniqueName: \"kubernetes.io/projected/56eb4b72-a8dc-4300-882e-e29d83442af5-kube-api-access-9xn5x\") pod 
\"certified-operators-bjssk\" (UID: \"56eb4b72-a8dc-4300-882e-e29d83442af5\") " pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.807673 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.839471 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8bz6z"] Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.975030 4787 generic.go:334] "Generic (PLEG): container finished" podID="b6daca56-9e67-41d1-80da-c213717daead" containerID="452a6e75cc64f0f6e72889906fdf691f86ceddac279d22cf78027918918f427b" exitCode=0 Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.975117 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdpsb" event={"ID":"b6daca56-9e67-41d1-80da-c213717daead","Type":"ContainerDied","Data":"452a6e75cc64f0f6e72889906fdf691f86ceddac279d22cf78027918918f427b"} Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.981799 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bz6z" event={"ID":"3c2c8464-ea72-46ca-a23d-ae23e6617ce8","Type":"ContainerStarted","Data":"1a55db6079e4e2a4a90bebff02db8967f3bdc067060b1b8a2a9ef07821d7ec0e"} Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.987047 4787 generic.go:334] "Generic (PLEG): container finished" podID="6953906d-8a35-4d3e-83a3-3a7451e834cc" containerID="87c497327bb297ed101f098db88efc948651c4ea93ebe1b661e3bbbd9fe3b87e" exitCode=0 Jan 27 07:57:35 crc kubenswrapper[4787]: I0127 07:57:35.987110 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjx52" event={"ID":"6953906d-8a35-4d3e-83a3-3a7451e834cc","Type":"ContainerDied","Data":"87c497327bb297ed101f098db88efc948651c4ea93ebe1b661e3bbbd9fe3b87e"} Jan 27 07:57:36 crc kubenswrapper[4787]: I0127 07:57:36.015943 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bjssk"] Jan 27 07:57:36 crc kubenswrapper[4787]: W0127 07:57:36.099681 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56eb4b72_a8dc_4300_882e_e29d83442af5.slice/crio-26ad75d0b6e5df9310ef03340acda404c5e435f68b2579f3d72291bcb30d5c0e WatchSource:0}: Error finding container 26ad75d0b6e5df9310ef03340acda404c5e435f68b2579f3d72291bcb30d5c0e: Status 404 returned error can't find the container with id 26ad75d0b6e5df9310ef03340acda404c5e435f68b2579f3d72291bcb30d5c0e Jan 27 07:57:36 crc kubenswrapper[4787]: I0127 07:57:36.995690 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjx52" event={"ID":"6953906d-8a35-4d3e-83a3-3a7451e834cc","Type":"ContainerStarted","Data":"3ba512f602a55355a937d1e0176c09fa5d943aae8d30ab63c2d2b4257d1ab4d3"} Jan 27 07:57:36 crc kubenswrapper[4787]: I0127 07:57:36.996756 4787 generic.go:334] "Generic (PLEG): container finished" podID="56eb4b72-a8dc-4300-882e-e29d83442af5" containerID="d3ea1a5612859ddae609b3ec66783a57dbe865e2cdb441b8e18db1598e6abe3b" exitCode=0 Jan 27 07:57:36 crc kubenswrapper[4787]: I0127 07:57:36.996813 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjssk" 
event={"ID":"56eb4b72-a8dc-4300-882e-e29d83442af5","Type":"ContainerDied","Data":"d3ea1a5612859ddae609b3ec66783a57dbe865e2cdb441b8e18db1598e6abe3b"} Jan 27 07:57:36 crc kubenswrapper[4787]: I0127 07:57:36.996839 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjssk" event={"ID":"56eb4b72-a8dc-4300-882e-e29d83442af5","Type":"ContainerStarted","Data":"26ad75d0b6e5df9310ef03340acda404c5e435f68b2579f3d72291bcb30d5c0e"} Jan 27 07:57:36 crc kubenswrapper[4787]: I0127 07:57:36.999776 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdpsb" event={"ID":"b6daca56-9e67-41d1-80da-c213717daead","Type":"ContainerStarted","Data":"7d0d94088a216e93ead0da66f75cca6ceb6f5d7752af44e0199239794d47a603"} Jan 27 07:57:37 crc kubenswrapper[4787]: I0127 07:57:37.004770 4787 generic.go:334] "Generic (PLEG): container finished" podID="3c2c8464-ea72-46ca-a23d-ae23e6617ce8" containerID="d22d20ce2f8b93a65bfab82a4cce35acf4607e4bdf62e7094833f70e242b2c6e" exitCode=0 Jan 27 07:57:37 crc kubenswrapper[4787]: I0127 07:57:37.004815 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bz6z" event={"ID":"3c2c8464-ea72-46ca-a23d-ae23e6617ce8","Type":"ContainerDied","Data":"d22d20ce2f8b93a65bfab82a4cce35acf4607e4bdf62e7094833f70e242b2c6e"} Jan 27 07:57:37 crc kubenswrapper[4787]: I0127 07:57:37.027803 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjx52" podStartSLOduration=2.281031369 podStartE2EDuration="5.027782652s" podCreationTimestamp="2026-01-27 07:57:32 +0000 UTC" firstStartedPulling="2026-01-27 07:57:33.952263828 +0000 UTC m=+359.604619340" lastFinishedPulling="2026-01-27 07:57:36.699015131 +0000 UTC m=+362.351370623" observedRunningTime="2026-01-27 07:57:37.024033935 +0000 UTC m=+362.676389427" watchObservedRunningTime="2026-01-27 07:57:37.027782652 +0000 UTC m=+362.680138144" Jan 27 07:57:37 crc kubenswrapper[4787]: I0127 07:57:37.069122 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rdpsb" podStartSLOduration=1.6315764320000001 podStartE2EDuration="4.069103309s" podCreationTimestamp="2026-01-27 07:57:33 +0000 UTC" firstStartedPulling="2026-01-27 07:57:33.961488666 +0000 UTC m=+359.613844158" lastFinishedPulling="2026-01-27 07:57:36.399015543 +0000 UTC m=+362.051371035" observedRunningTime="2026-01-27 07:57:37.066393229 +0000 UTC m=+362.718748721" watchObservedRunningTime="2026-01-27 07:57:37.069103309 +0000 UTC m=+362.721458801" Jan 27 07:57:38 crc kubenswrapper[4787]: I0127 07:57:38.012005 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bz6z" event={"ID":"3c2c8464-ea72-46ca-a23d-ae23e6617ce8","Type":"ContainerStarted","Data":"41cdb4514795bd2b27764d78141be18c4c2c76a4058e675c9a517264c45ac8ad"} Jan 27 07:57:38 crc kubenswrapper[4787]: I0127 07:57:38.014881 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjssk" event={"ID":"56eb4b72-a8dc-4300-882e-e29d83442af5","Type":"ContainerStarted","Data":"f44e6cb9089ace99223ebb1f10a8492aee7f149bbb8b863b35bedd0766f59398"} Jan 27 07:57:39 crc kubenswrapper[4787]: I0127 07:57:39.025007 4787 generic.go:334] "Generic (PLEG): container finished" podID="56eb4b72-a8dc-4300-882e-e29d83442af5" containerID="f44e6cb9089ace99223ebb1f10a8492aee7f149bbb8b863b35bedd0766f59398" exitCode=0 Jan 27 07:57:39 
crc kubenswrapper[4787]: I0127 07:57:39.025128 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjssk" event={"ID":"56eb4b72-a8dc-4300-882e-e29d83442af5","Type":"ContainerDied","Data":"f44e6cb9089ace99223ebb1f10a8492aee7f149bbb8b863b35bedd0766f59398"} Jan 27 07:57:39 crc kubenswrapper[4787]: I0127 07:57:39.028660 4787 generic.go:334] "Generic (PLEG): container finished" podID="3c2c8464-ea72-46ca-a23d-ae23e6617ce8" containerID="41cdb4514795bd2b27764d78141be18c4c2c76a4058e675c9a517264c45ac8ad" exitCode=0 Jan 27 07:57:39 crc kubenswrapper[4787]: I0127 07:57:39.028717 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bz6z" event={"ID":"3c2c8464-ea72-46ca-a23d-ae23e6617ce8","Type":"ContainerDied","Data":"41cdb4514795bd2b27764d78141be18c4c2c76a4058e675c9a517264c45ac8ad"} Jan 27 07:57:40 crc kubenswrapper[4787]: I0127 07:57:40.057466 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bz6z" event={"ID":"3c2c8464-ea72-46ca-a23d-ae23e6617ce8","Type":"ContainerStarted","Data":"66c4152080012d9a86134e18a8ea684c2b308eb6bfb64b1c394cbf5e8914e450"} Jan 27 07:57:40 crc kubenswrapper[4787]: I0127 07:57:40.077045 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8bz6z" podStartSLOduration=2.404908954 podStartE2EDuration="5.077004828s" podCreationTimestamp="2026-01-27 07:57:35 +0000 UTC" firstStartedPulling="2026-01-27 07:57:37.006569495 +0000 UTC m=+362.658924987" lastFinishedPulling="2026-01-27 07:57:39.678665359 +0000 UTC m=+365.331020861" observedRunningTime="2026-01-27 07:57:40.074681588 +0000 UTC m=+365.727037090" watchObservedRunningTime="2026-01-27 07:57:40.077004828 +0000 UTC m=+365.729360320" Jan 27 07:57:40 crc kubenswrapper[4787]: I0127 07:57:40.077914 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjssk" event={"ID":"56eb4b72-a8dc-4300-882e-e29d83442af5","Type":"ContainerStarted","Data":"eefecb3bb6d66c896447432f595d922732a7f7c819e2dd97bafc99e41a7c5a27"} Jan 27 07:57:40 crc kubenswrapper[4787]: I0127 07:57:40.100607 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bjssk" podStartSLOduration=2.6588983219999998 podStartE2EDuration="5.100583996s" podCreationTimestamp="2026-01-27 07:57:35 +0000 UTC" firstStartedPulling="2026-01-27 07:57:36.998784803 +0000 UTC m=+362.651140295" lastFinishedPulling="2026-01-27 07:57:39.440470477 +0000 UTC m=+365.092825969" observedRunningTime="2026-01-27 07:57:40.097660441 +0000 UTC m=+365.750015953" watchObservedRunningTime="2026-01-27 07:57:40.100583996 +0000 UTC m=+365.752939498" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.200099 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.200703 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.259406 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.393434 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rdpsb" 
Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.393945 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.462124 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.772404 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ln8k8"] Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.773279 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.842304 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ln8k8"] Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.870756 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2a764dc-d301-4cef-aa6d-67f785db5868-trusted-ca\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.870840 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c2a764dc-d301-4cef-aa6d-67f785db5868-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.870878 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfbsj\" (UniqueName: \"kubernetes.io/projected/c2a764dc-d301-4cef-aa6d-67f785db5868-kube-api-access-cfbsj\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.871081 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c2a764dc-d301-4cef-aa6d-67f785db5868-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.871136 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c2a764dc-d301-4cef-aa6d-67f785db5868-registry-tls\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.871175 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c2a764dc-d301-4cef-aa6d-67f785db5868-registry-certificates\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" 
Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.871206 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2a764dc-d301-4cef-aa6d-67f785db5868-bound-sa-token\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.871294 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.896382 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.973088 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2a764dc-d301-4cef-aa6d-67f785db5868-trusted-ca\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.973233 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c2a764dc-d301-4cef-aa6d-67f785db5868-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.973275 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfbsj\" (UniqueName: \"kubernetes.io/projected/c2a764dc-d301-4cef-aa6d-67f785db5868-kube-api-access-cfbsj\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.973306 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c2a764dc-d301-4cef-aa6d-67f785db5868-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.973350 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c2a764dc-d301-4cef-aa6d-67f785db5868-registry-tls\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.973400 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c2a764dc-d301-4cef-aa6d-67f785db5868-registry-certificates\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.973429 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2a764dc-d301-4cef-aa6d-67f785db5868-bound-sa-token\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.974184 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c2a764dc-d301-4cef-aa6d-67f785db5868-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.974861 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2a764dc-d301-4cef-aa6d-67f785db5868-trusted-ca\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.975432 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c2a764dc-d301-4cef-aa6d-67f785db5868-registry-certificates\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.980747 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c2a764dc-d301-4cef-aa6d-67f785db5868-registry-tls\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.981250 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c2a764dc-d301-4cef-aa6d-67f785db5868-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.992326 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2a764dc-d301-4cef-aa6d-67f785db5868-bound-sa-token\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:43 crc kubenswrapper[4787]: I0127 07:57:43.994043 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfbsj\" (UniqueName: \"kubernetes.io/projected/c2a764dc-d301-4cef-aa6d-67f785db5868-kube-api-access-cfbsj\") pod \"image-registry-66df7c8f76-ln8k8\" (UID: \"c2a764dc-d301-4cef-aa6d-67f785db5868\") " pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 
07:57:44 crc kubenswrapper[4787]: I0127 07:57:44.091013 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:44 crc kubenswrapper[4787]: I0127 07:57:44.164652 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rdpsb" Jan 27 07:57:44 crc kubenswrapper[4787]: I0127 07:57:44.165787 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjx52" Jan 27 07:57:44 crc kubenswrapper[4787]: I0127 07:57:44.334233 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ln8k8"] Jan 27 07:57:45 crc kubenswrapper[4787]: I0127 07:57:45.128412 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" event={"ID":"c2a764dc-d301-4cef-aa6d-67f785db5868","Type":"ContainerStarted","Data":"cc31b61aca8e41f56ff29d382be7ae8a36f77b0b3ed3f42f7d94539de9163981"} Jan 27 07:57:45 crc kubenswrapper[4787]: I0127 07:57:45.129160 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" event={"ID":"c2a764dc-d301-4cef-aa6d-67f785db5868","Type":"ContainerStarted","Data":"8acde4aaed8887ff89bc1532c729baf24ed884982074e088d86cc2476e606b1d"} Jan 27 07:57:45 crc kubenswrapper[4787]: I0127 07:57:45.156729 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" podStartSLOduration=2.156708876 podStartE2EDuration="2.156708876s" podCreationTimestamp="2026-01-27 07:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:57:45.150896815 +0000 UTC m=+370.803252327" watchObservedRunningTime="2026-01-27 07:57:45.156708876 +0000 UTC m=+370.809064358" Jan 27 07:57:45 crc kubenswrapper[4787]: I0127 07:57:45.616941 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:45 crc kubenswrapper[4787]: I0127 07:57:45.617016 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:45 crc kubenswrapper[4787]: I0127 07:57:45.807942 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:45 crc kubenswrapper[4787]: I0127 07:57:45.808038 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:45 crc kubenswrapper[4787]: I0127 07:57:45.857965 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:46 crc kubenswrapper[4787]: I0127 07:57:46.134473 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:57:46 crc kubenswrapper[4787]: I0127 07:57:46.188449 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bjssk" Jan 27 07:57:46 crc kubenswrapper[4787]: I0127 07:57:46.665381 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8bz6z" podUID="3c2c8464-ea72-46ca-a23d-ae23e6617ce8" 
containerName="registry-server" probeResult="failure" output=< Jan 27 07:57:46 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Jan 27 07:57:46 crc kubenswrapper[4787]: > Jan 27 07:57:52 crc kubenswrapper[4787]: I0127 07:57:52.822855 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:57:52 crc kubenswrapper[4787]: I0127 07:57:52.823740 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:57:55 crc kubenswrapper[4787]: I0127 07:57:55.661087 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:57:55 crc kubenswrapper[4787]: I0127 07:57:55.714863 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8bz6z" Jan 27 07:58:04 crc kubenswrapper[4787]: I0127 07:58:04.099276 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-ln8k8" Jan 27 07:58:04 crc kubenswrapper[4787]: I0127 07:58:04.163633 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-d44bj"] Jan 27 07:58:22 crc kubenswrapper[4787]: I0127 07:58:22.823731 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:58:22 crc kubenswrapper[4787]: I0127 07:58:22.824979 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.208682 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" podUID="5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" containerName="registry" containerID="cri-o://661df021152bc7f98a651ccf560f2bb03e44a6e4242d1b20ac64c7c4a614aaf1" gracePeriod=30 Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.424526 4787 generic.go:334] "Generic (PLEG): container finished" podID="5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" containerID="661df021152bc7f98a651ccf560f2bb03e44a6e4242d1b20ac64c7c4a614aaf1" exitCode=0 Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.424620 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" event={"ID":"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0","Type":"ContainerDied","Data":"661df021152bc7f98a651ccf560f2bb03e44a6e4242d1b20ac64c7c4a614aaf1"} Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.564212 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.656697 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-registry-tls\") pod \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.656751 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-bound-sa-token\") pod \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.656985 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.657031 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-installation-pull-secrets\") pod \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.657079 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd9pb\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-kube-api-access-sd9pb\") pod \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.657123 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-registry-certificates\") pod \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.657171 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-trusted-ca\") pod \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.657237 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-ca-trust-extracted\") pod \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\" (UID: \"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0\") " Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.658204 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.658372 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.664739 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-kube-api-access-sd9pb" (OuterVolumeSpecName: "kube-api-access-sd9pb") pod "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0"). InnerVolumeSpecName "kube-api-access-sd9pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.664737 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.665181 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.666310 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.671233 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.676160 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" (UID: "5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.758299 4787 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.758342 4787 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.758394 4787 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.758409 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd9pb\" (UniqueName: \"kubernetes.io/projected/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-kube-api-access-sd9pb\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.758420 4787 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.758430 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:29 crc kubenswrapper[4787]: I0127 07:58:29.758440 4787 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 07:58:30 crc kubenswrapper[4787]: I0127 07:58:30.435522 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" event={"ID":"5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0","Type":"ContainerDied","Data":"f164a16895fc4826be32f420da3768fd0f293a4a904dce0ed637757e1fbcf513"} Jan 27 07:58:30 crc kubenswrapper[4787]: I0127 07:58:30.436838 4787 scope.go:117] "RemoveContainer" containerID="661df021152bc7f98a651ccf560f2bb03e44a6e4242d1b20ac64c7c4a614aaf1" Jan 27 07:58:30 crc kubenswrapper[4787]: I0127 07:58:30.435744 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-d44bj" Jan 27 07:58:30 crc kubenswrapper[4787]: I0127 07:58:30.487031 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-d44bj"] Jan 27 07:58:30 crc kubenswrapper[4787]: I0127 07:58:30.494132 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-d44bj"] Jan 27 07:58:31 crc kubenswrapper[4787]: I0127 07:58:31.090261 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" path="/var/lib/kubelet/pods/5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0/volumes" Jan 27 07:58:52 crc kubenswrapper[4787]: I0127 07:58:52.823923 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:58:52 crc kubenswrapper[4787]: I0127 07:58:52.825091 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:58:52 crc kubenswrapper[4787]: I0127 07:58:52.825179 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 07:58:52 crc kubenswrapper[4787]: I0127 07:58:52.827102 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef7962ef772271e4959fec84d91ffb697a93484f02f3f1aa59cfda3d789d7c84"} pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:58:52 crc kubenswrapper[4787]: I0127 07:58:52.827311 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" containerID="cri-o://ef7962ef772271e4959fec84d91ffb697a93484f02f3f1aa59cfda3d789d7c84" gracePeriod=600 Jan 27 07:58:53 crc kubenswrapper[4787]: I0127 07:58:53.596283 4787 generic.go:334] "Generic (PLEG): container finished" podID="f051e184-acac-47cf-9e04-9df648288715" containerID="ef7962ef772271e4959fec84d91ffb697a93484f02f3f1aa59cfda3d789d7c84" exitCode=0 Jan 27 07:58:53 crc kubenswrapper[4787]: I0127 07:58:53.596364 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerDied","Data":"ef7962ef772271e4959fec84d91ffb697a93484f02f3f1aa59cfda3d789d7c84"} Jan 27 07:58:53 crc kubenswrapper[4787]: I0127 07:58:53.597180 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerStarted","Data":"e4c1a6fd72ac92bada26c4a251440b894a7362ee02ad76996c3668ab0c3d7b09"} Jan 27 07:58:53 crc kubenswrapper[4787]: I0127 07:58:53.597213 4787 scope.go:117] "RemoveContainer" 
containerID="8829867d627dfed47b988a93a9ac3e381c0bf59794b65f8e1c1e821838477748" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.181191 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn"] Jan 27 08:00:00 crc kubenswrapper[4787]: E0127 08:00:00.182183 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" containerName="registry" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.182198 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" containerName="registry" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.182288 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd6837c-65d0-40c3-9cbc-d05d4f1e93d0" containerName="registry" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.182741 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.185186 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.185537 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.196453 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn"] Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.208959 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-config-volume\") pod \"collect-profiles-29491680-tghsn\" (UID: \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.209468 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4twzb\" (UniqueName: \"kubernetes.io/projected/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-kube-api-access-4twzb\") pod \"collect-profiles-29491680-tghsn\" (UID: \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.209540 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-secret-volume\") pod \"collect-profiles-29491680-tghsn\" (UID: \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.310292 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-secret-volume\") pod \"collect-profiles-29491680-tghsn\" (UID: \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.310375 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-config-volume\") pod \"collect-profiles-29491680-tghsn\" (UID: \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.310424 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4twzb\" (UniqueName: \"kubernetes.io/projected/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-kube-api-access-4twzb\") pod \"collect-profiles-29491680-tghsn\" (UID: \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.311773 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-config-volume\") pod \"collect-profiles-29491680-tghsn\" (UID: \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.318182 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-secret-volume\") pod \"collect-profiles-29491680-tghsn\" (UID: \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.326783 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4twzb\" (UniqueName: \"kubernetes.io/projected/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-kube-api-access-4twzb\") pod \"collect-profiles-29491680-tghsn\" (UID: \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.508331 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:00:00 crc kubenswrapper[4787]: I0127 08:00:00.723720 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn"] Jan 27 08:00:01 crc kubenswrapper[4787]: I0127 08:00:01.369259 4787 generic.go:334] "Generic (PLEG): container finished" podID="0f2fa041-8cee-4085-be76-4eb4b5e9cdb3" containerID="1be0ed9c2bc25caff91582d8baee05ebb7c0bfff01b2f985dd4895d1df918858" exitCode=0 Jan 27 08:00:01 crc kubenswrapper[4787]: I0127 08:00:01.369397 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" event={"ID":"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3","Type":"ContainerDied","Data":"1be0ed9c2bc25caff91582d8baee05ebb7c0bfff01b2f985dd4895d1df918858"} Jan 27 08:00:01 crc kubenswrapper[4787]: I0127 08:00:01.369793 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" event={"ID":"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3","Type":"ContainerStarted","Data":"bd32da43f356f5ad22f56645e3c597f5643d85a114e4644224475a6430879cd2"} Jan 27 08:00:02 crc kubenswrapper[4787]: I0127 08:00:02.621248 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:00:02 crc kubenswrapper[4787]: I0127 08:00:02.747620 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-secret-volume\") pod \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\" (UID: \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\") " Jan 27 08:00:02 crc kubenswrapper[4787]: I0127 08:00:02.747765 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-config-volume\") pod \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\" (UID: \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\") " Jan 27 08:00:02 crc kubenswrapper[4787]: I0127 08:00:02.747795 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4twzb\" (UniqueName: \"kubernetes.io/projected/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-kube-api-access-4twzb\") pod \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\" (UID: \"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3\") " Jan 27 08:00:02 crc kubenswrapper[4787]: I0127 08:00:02.748932 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-config-volume" (OuterVolumeSpecName: "config-volume") pod "0f2fa041-8cee-4085-be76-4eb4b5e9cdb3" (UID: "0f2fa041-8cee-4085-be76-4eb4b5e9cdb3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:00:02 crc kubenswrapper[4787]: I0127 08:00:02.753566 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-kube-api-access-4twzb" (OuterVolumeSpecName: "kube-api-access-4twzb") pod "0f2fa041-8cee-4085-be76-4eb4b5e9cdb3" (UID: "0f2fa041-8cee-4085-be76-4eb4b5e9cdb3"). InnerVolumeSpecName "kube-api-access-4twzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:00:02 crc kubenswrapper[4787]: I0127 08:00:02.753595 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0f2fa041-8cee-4085-be76-4eb4b5e9cdb3" (UID: "0f2fa041-8cee-4085-be76-4eb4b5e9cdb3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:00:02 crc kubenswrapper[4787]: I0127 08:00:02.849089 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:02 crc kubenswrapper[4787]: I0127 08:00:02.849161 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:02 crc kubenswrapper[4787]: I0127 08:00:02.849192 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4twzb\" (UniqueName: \"kubernetes.io/projected/0f2fa041-8cee-4085-be76-4eb4b5e9cdb3-kube-api-access-4twzb\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:03 crc kubenswrapper[4787]: I0127 08:00:03.383814 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" event={"ID":"0f2fa041-8cee-4085-be76-4eb4b5e9cdb3","Type":"ContainerDied","Data":"bd32da43f356f5ad22f56645e3c597f5643d85a114e4644224475a6430879cd2"} Jan 27 08:00:03 crc kubenswrapper[4787]: I0127 08:00:03.383871 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd32da43f356f5ad22f56645e3c597f5643d85a114e4644224475a6430879cd2" Jan 27 08:00:03 crc kubenswrapper[4787]: I0127 08:00:03.383943 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-tghsn" Jan 27 08:01:22 crc kubenswrapper[4787]: I0127 08:01:22.822826 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:01:22 crc kubenswrapper[4787]: I0127 08:01:22.824690 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:01:52 crc kubenswrapper[4787]: I0127 08:01:52.823414 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:01:52 crc kubenswrapper[4787]: I0127 08:01:52.824334 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:02:22 crc kubenswrapper[4787]: I0127 08:02:22.823967 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:02:22 crc kubenswrapper[4787]: I0127 08:02:22.824920 
4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:02:22 crc kubenswrapper[4787]: I0127 08:02:22.825031 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 08:02:22 crc kubenswrapper[4787]: I0127 08:02:22.826326 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4c1a6fd72ac92bada26c4a251440b894a7362ee02ad76996c3668ab0c3d7b09"} pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:02:22 crc kubenswrapper[4787]: I0127 08:02:22.826460 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" containerID="cri-o://e4c1a6fd72ac92bada26c4a251440b894a7362ee02ad76996c3668ab0c3d7b09" gracePeriod=600 Jan 27 08:02:23 crc kubenswrapper[4787]: I0127 08:02:23.398126 4787 generic.go:334] "Generic (PLEG): container finished" podID="f051e184-acac-47cf-9e04-9df648288715" containerID="e4c1a6fd72ac92bada26c4a251440b894a7362ee02ad76996c3668ab0c3d7b09" exitCode=0 Jan 27 08:02:23 crc kubenswrapper[4787]: I0127 08:02:23.398309 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerDied","Data":"e4c1a6fd72ac92bada26c4a251440b894a7362ee02ad76996c3668ab0c3d7b09"} Jan 27 08:02:23 crc kubenswrapper[4787]: I0127 08:02:23.398675 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerStarted","Data":"76c11a5da51cedd24f4d664015e61242e532ce42823eb519b69423acc27d454d"} Jan 27 08:02:23 crc kubenswrapper[4787]: I0127 08:02:23.398703 4787 scope.go:117] "RemoveContainer" containerID="ef7962ef772271e4959fec84d91ffb697a93484f02f3f1aa59cfda3d789d7c84" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.183959 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v"] Jan 27 08:03:18 crc kubenswrapper[4787]: E0127 08:03:18.184955 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2fa041-8cee-4085-be76-4eb4b5e9cdb3" containerName="collect-profiles" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.184970 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2fa041-8cee-4085-be76-4eb4b5e9cdb3" containerName="collect-profiles" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.185101 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2fa041-8cee-4085-be76-4eb4b5e9cdb3" containerName="collect-profiles" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.185894 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.188011 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.199270 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v"] Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.320346 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btqb4\" (UniqueName: \"kubernetes.io/projected/b105de51-099b-4b81-b99d-6aa63d8821ae-kube-api-access-btqb4\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v\" (UID: \"b105de51-099b-4b81-b99d-6aa63d8821ae\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.320404 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b105de51-099b-4b81-b99d-6aa63d8821ae-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v\" (UID: \"b105de51-099b-4b81-b99d-6aa63d8821ae\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.320569 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b105de51-099b-4b81-b99d-6aa63d8821ae-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v\" (UID: \"b105de51-099b-4b81-b99d-6aa63d8821ae\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.422781 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btqb4\" (UniqueName: \"kubernetes.io/projected/b105de51-099b-4b81-b99d-6aa63d8821ae-kube-api-access-btqb4\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v\" (UID: \"b105de51-099b-4b81-b99d-6aa63d8821ae\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.422885 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b105de51-099b-4b81-b99d-6aa63d8821ae-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v\" (UID: \"b105de51-099b-4b81-b99d-6aa63d8821ae\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.422952 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b105de51-099b-4b81-b99d-6aa63d8821ae-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v\" (UID: \"b105de51-099b-4b81-b99d-6aa63d8821ae\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.424086 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b105de51-099b-4b81-b99d-6aa63d8821ae-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v\" (UID: \"b105de51-099b-4b81-b99d-6aa63d8821ae\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.424146 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b105de51-099b-4b81-b99d-6aa63d8821ae-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v\" (UID: \"b105de51-099b-4b81-b99d-6aa63d8821ae\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.464605 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btqb4\" (UniqueName: \"kubernetes.io/projected/b105de51-099b-4b81-b99d-6aa63d8821ae-kube-api-access-btqb4\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v\" (UID: \"b105de51-099b-4b81-b99d-6aa63d8821ae\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.503338 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:18 crc kubenswrapper[4787]: I0127 08:03:18.740846 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v"] Jan 27 08:03:18 crc kubenswrapper[4787]: W0127 08:03:18.757240 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb105de51_099b_4b81_b99d_6aa63d8821ae.slice/crio-75554e486da5acb10dc400cbefb82d6888492aa052b9eeab625e4178740f5bf7 WatchSource:0}: Error finding container 75554e486da5acb10dc400cbefb82d6888492aa052b9eeab625e4178740f5bf7: Status 404 returned error can't find the container with id 75554e486da5acb10dc400cbefb82d6888492aa052b9eeab625e4178740f5bf7 Jan 27 08:03:19 crc kubenswrapper[4787]: I0127 08:03:19.788964 4787 generic.go:334] "Generic (PLEG): container finished" podID="b105de51-099b-4b81-b99d-6aa63d8821ae" containerID="a8b8f2e80477ccecfdc1b7784d1b38fffdf7c2947da74d1752084526cd690013" exitCode=0 Jan 27 08:03:19 crc kubenswrapper[4787]: I0127 08:03:19.789046 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" event={"ID":"b105de51-099b-4b81-b99d-6aa63d8821ae","Type":"ContainerDied","Data":"a8b8f2e80477ccecfdc1b7784d1b38fffdf7c2947da74d1752084526cd690013"} Jan 27 08:03:19 crc kubenswrapper[4787]: I0127 08:03:19.789441 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" event={"ID":"b105de51-099b-4b81-b99d-6aa63d8821ae","Type":"ContainerStarted","Data":"75554e486da5acb10dc400cbefb82d6888492aa052b9eeab625e4178740f5bf7"} Jan 27 08:03:19 crc kubenswrapper[4787]: I0127 08:03:19.793993 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:03:21 crc kubenswrapper[4787]: I0127 08:03:21.813171 4787 generic.go:334] "Generic (PLEG): container finished" podID="b105de51-099b-4b81-b99d-6aa63d8821ae" 
containerID="e7193af437019d55be4443c0b7d3c5b45c81b64b96676e31ece60553f5a9a993" exitCode=0 Jan 27 08:03:21 crc kubenswrapper[4787]: I0127 08:03:21.813758 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" event={"ID":"b105de51-099b-4b81-b99d-6aa63d8821ae","Type":"ContainerDied","Data":"e7193af437019d55be4443c0b7d3c5b45c81b64b96676e31ece60553f5a9a993"} Jan 27 08:03:22 crc kubenswrapper[4787]: I0127 08:03:22.826500 4787 generic.go:334] "Generic (PLEG): container finished" podID="b105de51-099b-4b81-b99d-6aa63d8821ae" containerID="03b0d8be59b6c6084adde90eb5992df7e4072a74d746be53a75159504f1c8c26" exitCode=0 Jan 27 08:03:22 crc kubenswrapper[4787]: I0127 08:03:22.826641 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" event={"ID":"b105de51-099b-4b81-b99d-6aa63d8821ae","Type":"ContainerDied","Data":"03b0d8be59b6c6084adde90eb5992df7e4072a74d746be53a75159504f1c8c26"} Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.089790 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.207045 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b105de51-099b-4b81-b99d-6aa63d8821ae-bundle\") pod \"b105de51-099b-4b81-b99d-6aa63d8821ae\" (UID: \"b105de51-099b-4b81-b99d-6aa63d8821ae\") " Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.207099 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b105de51-099b-4b81-b99d-6aa63d8821ae-util\") pod \"b105de51-099b-4b81-b99d-6aa63d8821ae\" (UID: \"b105de51-099b-4b81-b99d-6aa63d8821ae\") " Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.207253 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btqb4\" (UniqueName: \"kubernetes.io/projected/b105de51-099b-4b81-b99d-6aa63d8821ae-kube-api-access-btqb4\") pod \"b105de51-099b-4b81-b99d-6aa63d8821ae\" (UID: \"b105de51-099b-4b81-b99d-6aa63d8821ae\") " Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.208345 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b105de51-099b-4b81-b99d-6aa63d8821ae-bundle" (OuterVolumeSpecName: "bundle") pod "b105de51-099b-4b81-b99d-6aa63d8821ae" (UID: "b105de51-099b-4b81-b99d-6aa63d8821ae"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.221376 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b105de51-099b-4b81-b99d-6aa63d8821ae-util" (OuterVolumeSpecName: "util") pod "b105de51-099b-4b81-b99d-6aa63d8821ae" (UID: "b105de51-099b-4b81-b99d-6aa63d8821ae"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.224970 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b105de51-099b-4b81-b99d-6aa63d8821ae-kube-api-access-btqb4" (OuterVolumeSpecName: "kube-api-access-btqb4") pod "b105de51-099b-4b81-b99d-6aa63d8821ae" (UID: "b105de51-099b-4b81-b99d-6aa63d8821ae"). 
InnerVolumeSpecName "kube-api-access-btqb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.308894 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b105de51-099b-4b81-b99d-6aa63d8821ae-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.308934 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b105de51-099b-4b81-b99d-6aa63d8821ae-util\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.308944 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btqb4\" (UniqueName: \"kubernetes.io/projected/b105de51-099b-4b81-b99d-6aa63d8821ae-kube-api-access-btqb4\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.841781 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" event={"ID":"b105de51-099b-4b81-b99d-6aa63d8821ae","Type":"ContainerDied","Data":"75554e486da5acb10dc400cbefb82d6888492aa052b9eeab625e4178740f5bf7"} Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.841833 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75554e486da5acb10dc400cbefb82d6888492aa052b9eeab625e4178740f5bf7" Jan 27 08:03:24 crc kubenswrapper[4787]: I0127 08:03:24.841869 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v" Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.817275 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zxwfx"] Jan 27 08:03:26 crc kubenswrapper[4787]: E0127 08:03:26.817534 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b105de51-099b-4b81-b99d-6aa63d8821ae" containerName="extract" Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.817572 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b105de51-099b-4b81-b99d-6aa63d8821ae" containerName="extract" Jan 27 08:03:26 crc kubenswrapper[4787]: E0127 08:03:26.817589 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b105de51-099b-4b81-b99d-6aa63d8821ae" containerName="util" Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.817596 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b105de51-099b-4b81-b99d-6aa63d8821ae" containerName="util" Jan 27 08:03:26 crc kubenswrapper[4787]: E0127 08:03:26.817611 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b105de51-099b-4b81-b99d-6aa63d8821ae" containerName="pull" Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.817618 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b105de51-099b-4b81-b99d-6aa63d8821ae" containerName="pull" Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.817729 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b105de51-099b-4b81-b99d-6aa63d8821ae" containerName="extract" Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.818180 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zxwfx" Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.820180 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.821306 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-dcpzr" Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.822328 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.835370 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zxwfx"] Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.847015 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x6g5\" (UniqueName: \"kubernetes.io/projected/d9fe5568-1516-4ee4-8ca1-90f4ea58ba3e-kube-api-access-9x6g5\") pod \"nmstate-operator-646758c888-zxwfx\" (UID: \"d9fe5568-1516-4ee4-8ca1-90f4ea58ba3e\") " pod="openshift-nmstate/nmstate-operator-646758c888-zxwfx" Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.947865 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6g5\" (UniqueName: \"kubernetes.io/projected/d9fe5568-1516-4ee4-8ca1-90f4ea58ba3e-kube-api-access-9x6g5\") pod \"nmstate-operator-646758c888-zxwfx\" (UID: \"d9fe5568-1516-4ee4-8ca1-90f4ea58ba3e\") " pod="openshift-nmstate/nmstate-operator-646758c888-zxwfx" Jan 27 08:03:26 crc kubenswrapper[4787]: I0127 08:03:26.967438 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6g5\" (UniqueName: \"kubernetes.io/projected/d9fe5568-1516-4ee4-8ca1-90f4ea58ba3e-kube-api-access-9x6g5\") pod \"nmstate-operator-646758c888-zxwfx\" (UID: \"d9fe5568-1516-4ee4-8ca1-90f4ea58ba3e\") " pod="openshift-nmstate/nmstate-operator-646758c888-zxwfx" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.136802 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zxwfx" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.351618 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zxwfx"] Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.435923 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7642m"] Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.436326 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovn-controller" containerID="cri-o://fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c" gracePeriod=30 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.436355 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="nbdb" containerID="cri-o://108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d" gracePeriod=30 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.436456 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="kube-rbac-proxy-node" containerID="cri-o://b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80" gracePeriod=30 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.436444 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="sbdb" containerID="cri-o://d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413" gracePeriod=30 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.436468 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovn-acl-logging" containerID="cri-o://fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c" gracePeriod=30 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.436617 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f" gracePeriod=30 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.436521 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="northd" containerID="cri-o://0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8" gracePeriod=30 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.484868 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" containerID="cri-o://be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c" gracePeriod=30 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.727878 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/3.log" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.730512 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovn-acl-logging/0.log" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.731099 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovn-controller/0.log" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.731638 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.788674 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x2w6d"] Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.788936 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.788951 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.788962 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.788968 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.788976 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="sbdb" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.788982 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="sbdb" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.788996 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovn-acl-logging" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789001 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovn-acl-logging" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.789014 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="northd" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789021 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="northd" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.789030 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789037 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.789043 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="kube-rbac-proxy-node" Jan 27 08:03:27 
crc kubenswrapper[4787]: I0127 08:03:27.789050 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="kube-rbac-proxy-node" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.789058 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789065 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.789081 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovn-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789089 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovn-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.789097 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789105 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.789114 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="kubecfg-setup" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789121 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="kubecfg-setup" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.789130 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="nbdb" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789137 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="nbdb" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789259 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="nbdb" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789278 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789287 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovn-acl-logging" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789298 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovn-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789310 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="kube-rbac-proxy-node" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789318 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789326 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa44405c-042c-485a-ab6c-912dcd377751" 
containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789332 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="sbdb" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789340 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789346 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789353 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="northd" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.789452 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789459 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.789568 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa44405c-042c-485a-ab6c-912dcd377751" containerName="ovnkube-controller" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.791120 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858207 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-openvswitch\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858250 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-log-socket\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858307 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-run-netns\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858336 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-var-lib-openvswitch\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858361 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858400 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-ovnkube-config\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858428 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-run-ovn-kubernetes\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858462 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-kubelet\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858488 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-systemd\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858457 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858486 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-log-socket" (OuterVolumeSpecName: "log-socket") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858534 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858516 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj4tn\" (UniqueName: \"kubernetes.io/projected/fa44405c-042c-485a-ab6c-912dcd377751-kube-api-access-tj4tn\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858473 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). 
InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858512 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858704 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-node-log\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858781 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-ovn\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858831 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-slash\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858855 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-cni-bin\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858873 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-cni-netd\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858942 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa44405c-042c-485a-ab6c-912dcd377751-ovn-node-metrics-cert\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.858993 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-etc-openvswitch\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859022 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-systemd-units\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859068 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-ovnkube-script-lib\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859095 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-var-lib-cni-networks-ovn-kubernetes\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859120 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-env-overrides\") pod \"fa44405c-042c-485a-ab6c-912dcd377751\" (UID: \"fa44405c-042c-485a-ab6c-912dcd377751\") " Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859116 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859134 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859168 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859177 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-node-log" (OuterVolumeSpecName: "node-log") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859201 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859206 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859225 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-slash" (OuterVolumeSpecName: "host-slash") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859254 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859275 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859568 4787 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859582 4787 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859591 4787 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859600 4787 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859610 4787 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859621 4787 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859629 4787 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859638 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859646 4787 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859641 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859663 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859655 4787 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859723 4787 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859750 4787 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859768 4787 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859781 4787 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.859794 4787 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.864409 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-zxwfx" event={"ID":"d9fe5568-1516-4ee4-8ca1-90f4ea58ba3e","Type":"ContainerStarted","Data":"3ef4563109c5b497ef7316fdd9db2d3de3a355d5acbde2c8650dd5a12d20907f"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.865159 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa44405c-042c-485a-ab6c-912dcd377751-kube-api-access-tj4tn" (OuterVolumeSpecName: "kube-api-access-tj4tn") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). 
InnerVolumeSpecName "kube-api-access-tj4tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.867198 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa44405c-042c-485a-ab6c-912dcd377751-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.867694 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rqjpz_e6f78168-0b0d-464d-b1c7-00bb9a69c0d1/kube-multus/2.log" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.868231 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rqjpz_e6f78168-0b0d-464d-b1c7-00bb9a69c0d1/kube-multus/1.log" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.868290 4787 generic.go:334] "Generic (PLEG): container finished" podID="e6f78168-0b0d-464d-b1c7-00bb9a69c0d1" containerID="30eb607bd3c5a74648f4c24cbbf8118159296bb63fb30a76b991d8fdb94cb16a" exitCode=2 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.868376 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rqjpz" event={"ID":"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1","Type":"ContainerDied","Data":"30eb607bd3c5a74648f4c24cbbf8118159296bb63fb30a76b991d8fdb94cb16a"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.868443 4787 scope.go:117] "RemoveContainer" containerID="4924d9f32dbc90c38c70d18db5a32bdabb76d288c34457331804829e451cf169" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.868914 4787 scope.go:117] "RemoveContainer" containerID="30eb607bd3c5a74648f4c24cbbf8118159296bb63fb30a76b991d8fdb94cb16a" Jan 27 08:03:27 crc kubenswrapper[4787]: E0127 08:03:27.869095 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rqjpz_openshift-multus(e6f78168-0b0d-464d-b1c7-00bb9a69c0d1)\"" pod="openshift-multus/multus-rqjpz" podUID="e6f78168-0b0d-464d-b1c7-00bb9a69c0d1" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.873724 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "fa44405c-042c-485a-ab6c-912dcd377751" (UID: "fa44405c-042c-485a-ab6c-912dcd377751"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.874495 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovnkube-controller/3.log" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.878926 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovn-acl-logging/0.log" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880072 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7642m_fa44405c-042c-485a-ab6c-912dcd377751/ovn-controller/0.log" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880827 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c" exitCode=0 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880854 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413" exitCode=0 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880861 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d" exitCode=0 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880869 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8" exitCode=0 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880876 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f" exitCode=0 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880883 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80" exitCode=0 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880901 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c" exitCode=143 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880909 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa44405c-042c-485a-ab6c-912dcd377751" containerID="fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c" exitCode=143 Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880868 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880948 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880957 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880977 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880988 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.880999 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881008 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881021 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881033 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881039 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881044 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881049 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881056 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881061 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881066 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881072 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881077 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881084 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881092 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881098 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881103 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881108 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881113 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881118 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881126 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881132 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881137 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881142 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881149 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881157 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881163 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881169 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881174 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881179 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881184 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881189 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881194 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881199 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881205 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881211 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7642m" event={"ID":"fa44405c-042c-485a-ab6c-912dcd377751","Type":"ContainerDied","Data":"40ba2420aaaf3b3cc8595bbc5c0c66623c4fd6910892664a858be15544316d35"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881219 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881225 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 
08:03:27.881230 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881235 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881240 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881246 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881251 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881256 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881261 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.881266 4787 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b"} Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.897399 4787 scope.go:117] "RemoveContainer" containerID="be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.924002 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7642m"] Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.928766 4787 scope.go:117] "RemoveContainer" containerID="6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.932570 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7642m"] Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.950204 4787 scope.go:117] "RemoveContainer" containerID="d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.960859 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-var-lib-openvswitch\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.960922 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/053b65b3-b8e5-490f-81b8-63a5beea5f16-ovnkube-script-lib\") pod \"ovnkube-node-x2w6d\" (UID: 
\"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.960957 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-kubelet\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961043 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-etc-openvswitch\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961123 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-run-netns\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961178 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-systemd-units\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961210 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zrj4\" (UniqueName: \"kubernetes.io/projected/053b65b3-b8e5-490f-81b8-63a5beea5f16-kube-api-access-4zrj4\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961231 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/053b65b3-b8e5-490f-81b8-63a5beea5f16-ovnkube-config\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961251 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-run-systemd\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961273 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-node-log\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961295 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-cni-netd\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961319 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961350 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/053b65b3-b8e5-490f-81b8-63a5beea5f16-ovn-node-metrics-cert\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961376 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-run-ovn\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961400 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-run-openvswitch\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961433 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-run-ovn-kubernetes\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961454 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-cni-bin\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961481 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-slash\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961502 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-log-socket\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961531 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/053b65b3-b8e5-490f-81b8-63a5beea5f16-env-overrides\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961719 4787 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa44405c-042c-485a-ab6c-912dcd377751-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961741 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj4tn\" (UniqueName: \"kubernetes.io/projected/fa44405c-042c-485a-ab6c-912dcd377751-kube-api-access-tj4tn\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961757 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa44405c-042c-485a-ab6c-912dcd377751-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961771 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.961783 4787 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa44405c-042c-485a-ab6c-912dcd377751-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:27 crc kubenswrapper[4787]: I0127 08:03:27.967617 4787 scope.go:117] "RemoveContainer" containerID="108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.016393 4787 scope.go:117] "RemoveContainer" containerID="0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.049988 4787 scope.go:117] "RemoveContainer" containerID="2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063303 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-run-openvswitch\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063386 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-run-ovn-kubernetes\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063428 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-run-openvswitch\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063464 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-cni-bin\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063433 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-cni-bin\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063490 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-run-ovn-kubernetes\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063504 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-slash\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063527 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-log-socket\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063566 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/053b65b3-b8e5-490f-81b8-63a5beea5f16-env-overrides\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063604 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-var-lib-openvswitch\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063628 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/053b65b3-b8e5-490f-81b8-63a5beea5f16-ovnkube-script-lib\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063652 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-kubelet\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063674 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-etc-openvswitch\") 
pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063702 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-run-netns\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063750 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-systemd-units\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063779 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/053b65b3-b8e5-490f-81b8-63a5beea5f16-ovnkube-config\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063802 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zrj4\" (UniqueName: \"kubernetes.io/projected/053b65b3-b8e5-490f-81b8-63a5beea5f16-kube-api-access-4zrj4\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063824 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-run-systemd\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063844 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-node-log\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063866 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-cni-netd\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063910 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063952 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/053b65b3-b8e5-490f-81b8-63a5beea5f16-ovn-node-metrics-cert\") pod \"ovnkube-node-x2w6d\" (UID: 
\"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.063979 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-run-ovn\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.064044 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-run-ovn\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.064075 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-slash\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.064101 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-log-socket\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.064813 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/053b65b3-b8e5-490f-81b8-63a5beea5f16-ovnkube-config\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.064861 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-var-lib-openvswitch\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.065017 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-run-systemd\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.065060 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-node-log\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.065104 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-cni-netd\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.065140 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.065421 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/053b65b3-b8e5-490f-81b8-63a5beea5f16-ovnkube-script-lib\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.065463 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-kubelet\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.065492 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-etc-openvswitch\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.065522 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-host-run-netns\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.065688 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/053b65b3-b8e5-490f-81b8-63a5beea5f16-systemd-units\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.066951 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/053b65b3-b8e5-490f-81b8-63a5beea5f16-env-overrides\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.067701 4787 scope.go:117] "RemoveContainer" containerID="b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.070430 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/053b65b3-b8e5-490f-81b8-63a5beea5f16-ovn-node-metrics-cert\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.081457 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zrj4\" (UniqueName: \"kubernetes.io/projected/053b65b3-b8e5-490f-81b8-63a5beea5f16-kube-api-access-4zrj4\") pod \"ovnkube-node-x2w6d\" (UID: \"053b65b3-b8e5-490f-81b8-63a5beea5f16\") " pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc 
kubenswrapper[4787]: I0127 08:03:28.106146 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.123812 4787 scope.go:117] "RemoveContainer" containerID="fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c" Jan 27 08:03:28 crc kubenswrapper[4787]: W0127 08:03:28.133301 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod053b65b3_b8e5_490f_81b8_63a5beea5f16.slice/crio-cbc85ce561f2b9cb45c3f65310a13d497be58ceb62f9eb96fd12a7bbaebac168 WatchSource:0}: Error finding container cbc85ce561f2b9cb45c3f65310a13d497be58ceb62f9eb96fd12a7bbaebac168: Status 404 returned error can't find the container with id cbc85ce561f2b9cb45c3f65310a13d497be58ceb62f9eb96fd12a7bbaebac168 Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.155154 4787 scope.go:117] "RemoveContainer" containerID="fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.177062 4787 scope.go:117] "RemoveContainer" containerID="199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.201603 4787 scope.go:117] "RemoveContainer" containerID="be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c" Jan 27 08:03:28 crc kubenswrapper[4787]: E0127 08:03:28.202576 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c\": container with ID starting with be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c not found: ID does not exist" containerID="be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.202637 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c"} err="failed to get container status \"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c\": rpc error: code = NotFound desc = could not find container \"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c\": container with ID starting with be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.202681 4787 scope.go:117] "RemoveContainer" containerID="6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637" Jan 27 08:03:28 crc kubenswrapper[4787]: E0127 08:03:28.203325 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\": container with ID starting with 6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637 not found: ID does not exist" containerID="6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.203373 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637"} err="failed to get container status \"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\": rpc error: code = NotFound desc = could not find container 
\"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\": container with ID starting with 6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.203405 4787 scope.go:117] "RemoveContainer" containerID="d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413" Jan 27 08:03:28 crc kubenswrapper[4787]: E0127 08:03:28.203828 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\": container with ID starting with d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413 not found: ID does not exist" containerID="d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.203885 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413"} err="failed to get container status \"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\": rpc error: code = NotFound desc = could not find container \"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\": container with ID starting with d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.203920 4787 scope.go:117] "RemoveContainer" containerID="108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d" Jan 27 08:03:28 crc kubenswrapper[4787]: E0127 08:03:28.204342 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\": container with ID starting with 108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d not found: ID does not exist" containerID="108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.204434 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d"} err="failed to get container status \"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\": rpc error: code = NotFound desc = could not find container \"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\": container with ID starting with 108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.204523 4787 scope.go:117] "RemoveContainer" containerID="0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8" Jan 27 08:03:28 crc kubenswrapper[4787]: E0127 08:03:28.204935 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\": container with ID starting with 0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8 not found: ID does not exist" containerID="0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.204967 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8"} 
err="failed to get container status \"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\": rpc error: code = NotFound desc = could not find container \"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\": container with ID starting with 0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.204994 4787 scope.go:117] "RemoveContainer" containerID="2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f" Jan 27 08:03:28 crc kubenswrapper[4787]: E0127 08:03:28.205332 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\": container with ID starting with 2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f not found: ID does not exist" containerID="2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.205372 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f"} err="failed to get container status \"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\": rpc error: code = NotFound desc = could not find container \"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\": container with ID starting with 2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.205402 4787 scope.go:117] "RemoveContainer" containerID="b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80" Jan 27 08:03:28 crc kubenswrapper[4787]: E0127 08:03:28.205737 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\": container with ID starting with b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80 not found: ID does not exist" containerID="b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.205830 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80"} err="failed to get container status \"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\": rpc error: code = NotFound desc = could not find container \"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\": container with ID starting with b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.205909 4787 scope.go:117] "RemoveContainer" containerID="fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c" Jan 27 08:03:28 crc kubenswrapper[4787]: E0127 08:03:28.206306 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\": container with ID starting with fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c not found: ID does not exist" containerID="fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.206378 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c"} err="failed to get container status \"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\": rpc error: code = NotFound desc = could not find container \"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\": container with ID starting with fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.206448 4787 scope.go:117] "RemoveContainer" containerID="fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c" Jan 27 08:03:28 crc kubenswrapper[4787]: E0127 08:03:28.207219 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\": container with ID starting with fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c not found: ID does not exist" containerID="fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.207261 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c"} err="failed to get container status \"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\": rpc error: code = NotFound desc = could not find container \"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\": container with ID starting with fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.207290 4787 scope.go:117] "RemoveContainer" containerID="199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b" Jan 27 08:03:28 crc kubenswrapper[4787]: E0127 08:03:28.207820 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\": container with ID starting with 199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b not found: ID does not exist" containerID="199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.207921 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b"} err="failed to get container status \"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\": rpc error: code = NotFound desc = could not find container \"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\": container with ID starting with 199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.207942 4787 scope.go:117] "RemoveContainer" containerID="be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.208362 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c"} err="failed to get container status \"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c\": rpc error: code = NotFound desc = could 
not find container \"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c\": container with ID starting with be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.208387 4787 scope.go:117] "RemoveContainer" containerID="6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.208661 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637"} err="failed to get container status \"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\": rpc error: code = NotFound desc = could not find container \"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\": container with ID starting with 6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.208696 4787 scope.go:117] "RemoveContainer" containerID="d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.209221 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413"} err="failed to get container status \"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\": rpc error: code = NotFound desc = could not find container \"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\": container with ID starting with d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.209355 4787 scope.go:117] "RemoveContainer" containerID="108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.209883 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d"} err="failed to get container status \"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\": rpc error: code = NotFound desc = could not find container \"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\": container with ID starting with 108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.209908 4787 scope.go:117] "RemoveContainer" containerID="0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.210196 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8"} err="failed to get container status \"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\": rpc error: code = NotFound desc = could not find container \"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\": container with ID starting with 0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.210277 4787 scope.go:117] "RemoveContainer" containerID="2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.210609 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f"} err="failed to get container status \"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\": rpc error: code = NotFound desc = could not find container \"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\": container with ID starting with 2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.210630 4787 scope.go:117] "RemoveContainer" containerID="b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.210932 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80"} err="failed to get container status \"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\": rpc error: code = NotFound desc = could not find container \"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\": container with ID starting with b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.210952 4787 scope.go:117] "RemoveContainer" containerID="fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.211189 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c"} err="failed to get container status \"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\": rpc error: code = NotFound desc = could not find container \"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\": container with ID starting with fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.211266 4787 scope.go:117] "RemoveContainer" containerID="fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.211639 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c"} err="failed to get container status \"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\": rpc error: code = NotFound desc = could not find container \"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\": container with ID starting with fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.211720 4787 scope.go:117] "RemoveContainer" containerID="199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.212144 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b"} err="failed to get container status \"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\": rpc error: code = NotFound desc = could not find container \"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\": container with ID starting with 
199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.212168 4787 scope.go:117] "RemoveContainer" containerID="be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.212496 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c"} err="failed to get container status \"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c\": rpc error: code = NotFound desc = could not find container \"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c\": container with ID starting with be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.212609 4787 scope.go:117] "RemoveContainer" containerID="6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.213292 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637"} err="failed to get container status \"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\": rpc error: code = NotFound desc = could not find container \"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\": container with ID starting with 6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.213391 4787 scope.go:117] "RemoveContainer" containerID="d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.213678 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413"} err="failed to get container status \"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\": rpc error: code = NotFound desc = could not find container \"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\": container with ID starting with d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.213747 4787 scope.go:117] "RemoveContainer" containerID="108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.214233 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d"} err="failed to get container status \"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\": rpc error: code = NotFound desc = could not find container \"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\": container with ID starting with 108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.214327 4787 scope.go:117] "RemoveContainer" containerID="0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.214785 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8"} err="failed to get container status \"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\": rpc error: code = NotFound desc = could not find container \"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\": container with ID starting with 0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.214821 4787 scope.go:117] "RemoveContainer" containerID="2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.215199 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f"} err="failed to get container status \"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\": rpc error: code = NotFound desc = could not find container \"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\": container with ID starting with 2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.215235 4787 scope.go:117] "RemoveContainer" containerID="b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.215598 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80"} err="failed to get container status \"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\": rpc error: code = NotFound desc = could not find container \"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\": container with ID starting with b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.215671 4787 scope.go:117] "RemoveContainer" containerID="fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.215981 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c"} err="failed to get container status \"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\": rpc error: code = NotFound desc = could not find container \"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\": container with ID starting with fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.216050 4787 scope.go:117] "RemoveContainer" containerID="fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.216353 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c"} err="failed to get container status \"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\": rpc error: code = NotFound desc = could not find container \"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\": container with ID starting with fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c not found: ID does not exist" Jan 
27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.216377 4787 scope.go:117] "RemoveContainer" containerID="199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.216749 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b"} err="failed to get container status \"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\": rpc error: code = NotFound desc = could not find container \"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\": container with ID starting with 199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.216855 4787 scope.go:117] "RemoveContainer" containerID="be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.217259 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c"} err="failed to get container status \"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c\": rpc error: code = NotFound desc = could not find container \"be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c\": container with ID starting with be3770d2a2e5b599b3bac6445f4cdedb5531335a40c7e1a5727aa8b323e2394c not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.217294 4787 scope.go:117] "RemoveContainer" containerID="6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.217783 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637"} err="failed to get container status \"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\": rpc error: code = NotFound desc = could not find container \"6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637\": container with ID starting with 6d3ec81938807879c988aabe13c7bf18add843b4295e6a4228405f02b47fd637 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.217807 4787 scope.go:117] "RemoveContainer" containerID="d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.219657 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413"} err="failed to get container status \"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\": rpc error: code = NotFound desc = could not find container \"d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413\": container with ID starting with d46b5c512f7b14d97a53ce0355a7ceb08370d3e91ccef003b40386a9785df413 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.219680 4787 scope.go:117] "RemoveContainer" containerID="108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.219967 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d"} err="failed to get container status 
\"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\": rpc error: code = NotFound desc = could not find container \"108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d\": container with ID starting with 108cdc389c2cbb9baeb538856e70ea1ca2bce5968eb4097700b3e71e5322258d not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.219998 4787 scope.go:117] "RemoveContainer" containerID="0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.220226 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8"} err="failed to get container status \"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\": rpc error: code = NotFound desc = could not find container \"0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8\": container with ID starting with 0bcc1a1d9f6e5bd0d8cf9b36441460debd839f92291531175eb05db5070136e8 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.220239 4787 scope.go:117] "RemoveContainer" containerID="2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.220514 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f"} err="failed to get container status \"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\": rpc error: code = NotFound desc = could not find container \"2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f\": container with ID starting with 2abb29a77d0081b70495701e68f0a5a9b80656eab59ae8de10fab1450a1df49f not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.220526 4787 scope.go:117] "RemoveContainer" containerID="b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.220768 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80"} err="failed to get container status \"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\": rpc error: code = NotFound desc = could not find container \"b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80\": container with ID starting with b5f54aa358ddf7d9fe76fd45c4e3332ba2ef1fa4da77c6f38ae29209c4339a80 not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.220790 4787 scope.go:117] "RemoveContainer" containerID="fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.221062 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c"} err="failed to get container status \"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\": rpc error: code = NotFound desc = could not find container \"fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c\": container with ID starting with fbc588f712cb0167029fc80f924f52841fb904738bbcf774c53ca15a2642ef9c not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.221089 4787 scope.go:117] "RemoveContainer" 
containerID="fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.221376 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c"} err="failed to get container status \"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\": rpc error: code = NotFound desc = could not find container \"fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c\": container with ID starting with fc1a27dd4120a4eafd6bfbd908f9fdbc80e9b006afafb509399f2b657b129b2c not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.221401 4787 scope.go:117] "RemoveContainer" containerID="199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.221687 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b"} err="failed to get container status \"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\": rpc error: code = NotFound desc = could not find container \"199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b\": container with ID starting with 199ad62dee1b7ba9f8af9b2f3fcd77db94e8ba7dbfe2d011610c47358c17126b not found: ID does not exist" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.891922 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rqjpz_e6f78168-0b0d-464d-b1c7-00bb9a69c0d1/kube-multus/2.log" Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.895460 4787 generic.go:334] "Generic (PLEG): container finished" podID="053b65b3-b8e5-490f-81b8-63a5beea5f16" containerID="e0ba16c2e0ac88fa85c500b2cf38fd301e21168397734a8c121566dbd868ae48" exitCode=0 Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.895618 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" event={"ID":"053b65b3-b8e5-490f-81b8-63a5beea5f16","Type":"ContainerDied","Data":"e0ba16c2e0ac88fa85c500b2cf38fd301e21168397734a8c121566dbd868ae48"} Jan 27 08:03:28 crc kubenswrapper[4787]: I0127 08:03:28.895729 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" event={"ID":"053b65b3-b8e5-490f-81b8-63a5beea5f16","Type":"ContainerStarted","Data":"cbc85ce561f2b9cb45c3f65310a13d497be58ceb62f9eb96fd12a7bbaebac168"} Jan 27 08:03:29 crc kubenswrapper[4787]: I0127 08:03:29.086950 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa44405c-042c-485a-ab6c-912dcd377751" path="/var/lib/kubelet/pods/fa44405c-042c-485a-ab6c-912dcd377751/volumes" Jan 27 08:03:29 crc kubenswrapper[4787]: I0127 08:03:29.911856 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" event={"ID":"053b65b3-b8e5-490f-81b8-63a5beea5f16","Type":"ContainerStarted","Data":"2e9bf31768b6d1610948796500dce93498648bdc60d8e6a3f4d9a468a3987a39"} Jan 27 08:03:29 crc kubenswrapper[4787]: I0127 08:03:29.912361 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" event={"ID":"053b65b3-b8e5-490f-81b8-63a5beea5f16","Type":"ContainerStarted","Data":"2cd2c4a4cdcfc04d19871f2c4d6dfc8d172fed42cf55d86e3f9ccf998253c2bc"} Jan 27 08:03:29 crc kubenswrapper[4787]: I0127 08:03:29.912374 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" event={"ID":"053b65b3-b8e5-490f-81b8-63a5beea5f16","Type":"ContainerStarted","Data":"37a6d92ab73ded43196df28c8f636311ce3efb1b0983636aa31fd083313bd0e8"} Jan 27 08:03:29 crc kubenswrapper[4787]: I0127 08:03:29.912385 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" event={"ID":"053b65b3-b8e5-490f-81b8-63a5beea5f16","Type":"ContainerStarted","Data":"f558b98ac626f89919ff17dbf3313cd882c2f0d96d0ce4142944d7ce6a8d3a93"} Jan 27 08:03:29 crc kubenswrapper[4787]: I0127 08:03:29.912394 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" event={"ID":"053b65b3-b8e5-490f-81b8-63a5beea5f16","Type":"ContainerStarted","Data":"d6b4206fbe98fe487eb1c3c6c665d519fec4b9712ad785f18fe46911a460ef31"} Jan 27 08:03:29 crc kubenswrapper[4787]: I0127 08:03:29.912405 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" event={"ID":"053b65b3-b8e5-490f-81b8-63a5beea5f16","Type":"ContainerStarted","Data":"f0c0062e46e797c498bec60c4d2132cbfe34e77fa676fe066449be0713fceba2"} Jan 27 08:03:30 crc kubenswrapper[4787]: I0127 08:03:30.920112 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-zxwfx" event={"ID":"d9fe5568-1516-4ee4-8ca1-90f4ea58ba3e","Type":"ContainerStarted","Data":"271910d8ad8ae48a5d448b6da8d2f4612a930641077784cbd1093e19461b9e60"} Jan 27 08:03:30 crc kubenswrapper[4787]: I0127 08:03:30.936395 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-zxwfx" podStartSLOduration=1.7790120219999999 podStartE2EDuration="4.936377034s" podCreationTimestamp="2026-01-27 08:03:26 +0000 UTC" firstStartedPulling="2026-01-27 08:03:27.37119966 +0000 UTC m=+713.023555152" lastFinishedPulling="2026-01-27 08:03:30.528564672 +0000 UTC m=+716.180920164" observedRunningTime="2026-01-27 08:03:30.935620134 +0000 UTC m=+716.587975646" watchObservedRunningTime="2026-01-27 08:03:30.936377034 +0000 UTC m=+716.588732526" Jan 27 08:03:31 crc kubenswrapper[4787]: I0127 08:03:31.902674 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b8zkq"] Jan 27 08:03:31 crc kubenswrapper[4787]: I0127 08:03:31.904332 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:31 crc kubenswrapper[4787]: I0127 08:03:31.906613 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-lnx49" Jan 27 08:03:31 crc kubenswrapper[4787]: I0127 08:03:31.921306 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pddbj\" (UniqueName: \"kubernetes.io/projected/86b956d1-8553-4624-8324-f0a65d627e10-kube-api-access-pddbj\") pod \"nmstate-metrics-54757c584b-b8zkq\" (UID: \"86b956d1-8553-4624-8324-f0a65d627e10\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:31 crc kubenswrapper[4787]: I0127 08:03:31.965253 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk"] Jan 27 08:03:31 crc kubenswrapper[4787]: I0127 08:03:31.966681 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:31 crc kubenswrapper[4787]: I0127 08:03:31.971026 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 08:03:31 crc kubenswrapper[4787]: I0127 08:03:31.988316 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-fjcwr"] Jan 27 08:03:31 crc kubenswrapper[4787]: I0127 08:03:31.989295 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.022807 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pddbj\" (UniqueName: \"kubernetes.io/projected/86b956d1-8553-4624-8324-f0a65d627e10-kube-api-access-pddbj\") pod \"nmstate-metrics-54757c584b-b8zkq\" (UID: \"86b956d1-8553-4624-8324-f0a65d627e10\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.067845 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pddbj\" (UniqueName: \"kubernetes.io/projected/86b956d1-8553-4624-8324-f0a65d627e10-kube-api-access-pddbj\") pod \"nmstate-metrics-54757c584b-b8zkq\" (UID: \"86b956d1-8553-4624-8324-f0a65d627e10\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.076975 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb"] Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.077863 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.080125 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.081984 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.082012 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-kllbj" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.124694 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0131f9ae-88a6-41c3-8c76-6b2676d5e87b-dbus-socket\") pod \"nmstate-handler-fjcwr\" (UID: \"0131f9ae-88a6-41c3-8c76-6b2676d5e87b\") " pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.125576 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdjkk\" (UniqueName: \"kubernetes.io/projected/29bc137f-2121-451f-9684-2b4209b38c32-kube-api-access-fdjkk\") pod \"nmstate-webhook-8474b5b9d8-n6mzk\" (UID: \"29bc137f-2121-451f-9684-2b4209b38c32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.125639 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0131f9ae-88a6-41c3-8c76-6b2676d5e87b-ovs-socket\") pod \"nmstate-handler-fjcwr\" (UID: \"0131f9ae-88a6-41c3-8c76-6b2676d5e87b\") " 
pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.125676 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f99rx\" (UniqueName: \"kubernetes.io/projected/0131f9ae-88a6-41c3-8c76-6b2676d5e87b-kube-api-access-f99rx\") pod \"nmstate-handler-fjcwr\" (UID: \"0131f9ae-88a6-41c3-8c76-6b2676d5e87b\") " pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.125780 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0131f9ae-88a6-41c3-8c76-6b2676d5e87b-nmstate-lock\") pod \"nmstate-handler-fjcwr\" (UID: \"0131f9ae-88a6-41c3-8c76-6b2676d5e87b\") " pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.125807 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/29bc137f-2121-451f-9684-2b4209b38c32-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-n6mzk\" (UID: \"29bc137f-2121-451f-9684-2b4209b38c32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.222161 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.226832 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/51c3dc19-29a5-45ab-a384-fb7be3aa3f53-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7qxzb\" (UID: \"51c3dc19-29a5-45ab-a384-fb7be3aa3f53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.226903 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f99rx\" (UniqueName: \"kubernetes.io/projected/0131f9ae-88a6-41c3-8c76-6b2676d5e87b-kube-api-access-f99rx\") pod \"nmstate-handler-fjcwr\" (UID: \"0131f9ae-88a6-41c3-8c76-6b2676d5e87b\") " pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.226948 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnzzk\" (UniqueName: \"kubernetes.io/projected/51c3dc19-29a5-45ab-a384-fb7be3aa3f53-kube-api-access-gnzzk\") pod \"nmstate-console-plugin-7754f76f8b-7qxzb\" (UID: \"51c3dc19-29a5-45ab-a384-fb7be3aa3f53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.227010 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0131f9ae-88a6-41c3-8c76-6b2676d5e87b-nmstate-lock\") pod \"nmstate-handler-fjcwr\" (UID: \"0131f9ae-88a6-41c3-8c76-6b2676d5e87b\") " pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.227050 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/29bc137f-2121-451f-9684-2b4209b38c32-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-n6mzk\" (UID: \"29bc137f-2121-451f-9684-2b4209b38c32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 
08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.227087 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/51c3dc19-29a5-45ab-a384-fb7be3aa3f53-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7qxzb\" (UID: \"51c3dc19-29a5-45ab-a384-fb7be3aa3f53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.227117 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0131f9ae-88a6-41c3-8c76-6b2676d5e87b-dbus-socket\") pod \"nmstate-handler-fjcwr\" (UID: \"0131f9ae-88a6-41c3-8c76-6b2676d5e87b\") " pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.227159 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdjkk\" (UniqueName: \"kubernetes.io/projected/29bc137f-2121-451f-9684-2b4209b38c32-kube-api-access-fdjkk\") pod \"nmstate-webhook-8474b5b9d8-n6mzk\" (UID: \"29bc137f-2121-451f-9684-2b4209b38c32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.227193 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0131f9ae-88a6-41c3-8c76-6b2676d5e87b-ovs-socket\") pod \"nmstate-handler-fjcwr\" (UID: \"0131f9ae-88a6-41c3-8c76-6b2676d5e87b\") " pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.227282 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0131f9ae-88a6-41c3-8c76-6b2676d5e87b-ovs-socket\") pod \"nmstate-handler-fjcwr\" (UID: \"0131f9ae-88a6-41c3-8c76-6b2676d5e87b\") " pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.227750 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0131f9ae-88a6-41c3-8c76-6b2676d5e87b-dbus-socket\") pod \"nmstate-handler-fjcwr\" (UID: \"0131f9ae-88a6-41c3-8c76-6b2676d5e87b\") " pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.227980 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0131f9ae-88a6-41c3-8c76-6b2676d5e87b-nmstate-lock\") pod \"nmstate-handler-fjcwr\" (UID: \"0131f9ae-88a6-41c3-8c76-6b2676d5e87b\") " pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.234249 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/29bc137f-2121-451f-9684-2b4209b38c32-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-n6mzk\" (UID: \"29bc137f-2121-451f-9684-2b4209b38c32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.248442 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f99rx\" (UniqueName: \"kubernetes.io/projected/0131f9ae-88a6-41c3-8c76-6b2676d5e87b-kube-api-access-f99rx\") pod \"nmstate-handler-fjcwr\" (UID: \"0131f9ae-88a6-41c3-8c76-6b2676d5e87b\") " pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.257121 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdjkk\" (UniqueName: \"kubernetes.io/projected/29bc137f-2121-451f-9684-2b4209b38c32-kube-api-access-fdjkk\") pod \"nmstate-webhook-8474b5b9d8-n6mzk\" (UID: \"29bc137f-2121-451f-9684-2b4209b38c32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.259537 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6956947bc8-fxxct"] Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.260345 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.266281 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-b8zkq_openshift-nmstate_86b956d1-8553-4624-8324-f0a65d627e10_0(5decd527f71143284767151d9bc5ba44f95628c1b2ff00f96f7d6db9a0fce2d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.266360 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-b8zkq_openshift-nmstate_86b956d1-8553-4624-8324-f0a65d627e10_0(5decd527f71143284767151d9bc5ba44f95628c1b2ff00f96f7d6db9a0fce2d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.266382 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-b8zkq_openshift-nmstate_86b956d1-8553-4624-8324-f0a65d627e10_0(5decd527f71143284767151d9bc5ba44f95628c1b2ff00f96f7d6db9a0fce2d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.266433 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-metrics-54757c584b-b8zkq_openshift-nmstate(86b956d1-8553-4624-8324-f0a65d627e10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-metrics-54757c584b-b8zkq_openshift-nmstate(86b956d1-8553-4624-8324-f0a65d627e10)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-b8zkq_openshift-nmstate_86b956d1-8553-4624-8324-f0a65d627e10_0(5decd527f71143284767151d9bc5ba44f95628c1b2ff00f96f7d6db9a0fce2d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" podUID="86b956d1-8553-4624-8324-f0a65d627e10" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.298105 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.309045 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.328096 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/51c3dc19-29a5-45ab-a384-fb7be3aa3f53-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7qxzb\" (UID: \"51c3dc19-29a5-45ab-a384-fb7be3aa3f53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.328190 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/51c3dc19-29a5-45ab-a384-fb7be3aa3f53-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7qxzb\" (UID: \"51c3dc19-29a5-45ab-a384-fb7be3aa3f53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.328224 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnzzk\" (UniqueName: \"kubernetes.io/projected/51c3dc19-29a5-45ab-a384-fb7be3aa3f53-kube-api-access-gnzzk\") pod \"nmstate-console-plugin-7754f76f8b-7qxzb\" (UID: \"51c3dc19-29a5-45ab-a384-fb7be3aa3f53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.328440 4787 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.328543 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51c3dc19-29a5-45ab-a384-fb7be3aa3f53-plugin-serving-cert podName:51c3dc19-29a5-45ab-a384-fb7be3aa3f53 nodeName:}" failed. No retries permitted until 2026-01-27 08:03:32.8285203 +0000 UTC m=+718.480875792 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/51c3dc19-29a5-45ab-a384-fb7be3aa3f53-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-7qxzb" (UID: "51c3dc19-29a5-45ab-a384-fb7be3aa3f53") : secret "plugin-serving-cert" not found Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.329121 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/51c3dc19-29a5-45ab-a384-fb7be3aa3f53-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7qxzb\" (UID: \"51c3dc19-29a5-45ab-a384-fb7be3aa3f53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.336373 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate_29bc137f-2121-451f-9684-2b4209b38c32_0(c472175cccd40f44f355bbc14fd0278bf140ecf48090f128c219aa14005ad7f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.336444 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate_29bc137f-2121-451f-9684-2b4209b38c32_0(c472175cccd40f44f355bbc14fd0278bf140ecf48090f128c219aa14005ad7f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.336467 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate_29bc137f-2121-451f-9684-2b4209b38c32_0(c472175cccd40f44f355bbc14fd0278bf140ecf48090f128c219aa14005ad7f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.336523 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate(29bc137f-2121-451f-9684-2b4209b38c32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate(29bc137f-2121-451f-9684-2b4209b38c32)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate_29bc137f-2121-451f-9684-2b4209b38c32_0(c472175cccd40f44f355bbc14fd0278bf140ecf48090f128c219aa14005ad7f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" podUID="29bc137f-2121-451f-9684-2b4209b38c32" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.344705 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnzzk\" (UniqueName: \"kubernetes.io/projected/51c3dc19-29a5-45ab-a384-fb7be3aa3f53-kube-api-access-gnzzk\") pod \"nmstate-console-plugin-7754f76f8b-7qxzb\" (UID: \"51c3dc19-29a5-45ab-a384-fb7be3aa3f53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:32 crc kubenswrapper[4787]: W0127 08:03:32.348630 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0131f9ae_88a6_41c3_8c76_6b2676d5e87b.slice/crio-75dab7d4d875e1aff23c44820d9f34431de6bd53b4e8cb58fd290f9fc387faf9 WatchSource:0}: Error finding container 75dab7d4d875e1aff23c44820d9f34431de6bd53b4e8cb58fd290f9fc387faf9: Status 404 returned error can't find the container with id 75dab7d4d875e1aff23c44820d9f34431de6bd53b4e8cb58fd290f9fc387faf9 Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.430156 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b8a422e-b84a-431f-8634-ca1a64aec04e-trusted-ca-bundle\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.430244 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq4kw\" (UniqueName: \"kubernetes.io/projected/6b8a422e-b84a-431f-8634-ca1a64aec04e-kube-api-access-jq4kw\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.430330 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6b8a422e-b84a-431f-8634-ca1a64aec04e-console-config\") pod \"console-6956947bc8-fxxct\" (UID: 
\"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.430594 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6b8a422e-b84a-431f-8634-ca1a64aec04e-oauth-serving-cert\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.430666 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6b8a422e-b84a-431f-8634-ca1a64aec04e-service-ca\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.430713 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b8a422e-b84a-431f-8634-ca1a64aec04e-console-serving-cert\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.430748 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6b8a422e-b84a-431f-8634-ca1a64aec04e-console-oauth-config\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.531630 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b8a422e-b84a-431f-8634-ca1a64aec04e-trusted-ca-bundle\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.531726 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq4kw\" (UniqueName: \"kubernetes.io/projected/6b8a422e-b84a-431f-8634-ca1a64aec04e-kube-api-access-jq4kw\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.531825 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6b8a422e-b84a-431f-8634-ca1a64aec04e-console-config\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.531895 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6b8a422e-b84a-431f-8634-ca1a64aec04e-oauth-serving-cert\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.531931 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6b8a422e-b84a-431f-8634-ca1a64aec04e-service-ca\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.531970 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b8a422e-b84a-431f-8634-ca1a64aec04e-console-serving-cert\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.532188 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6b8a422e-b84a-431f-8634-ca1a64aec04e-console-oauth-config\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.532937 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b8a422e-b84a-431f-8634-ca1a64aec04e-trusted-ca-bundle\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.532996 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6b8a422e-b84a-431f-8634-ca1a64aec04e-console-config\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.533395 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6b8a422e-b84a-431f-8634-ca1a64aec04e-service-ca\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.533464 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6b8a422e-b84a-431f-8634-ca1a64aec04e-oauth-serving-cert\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.536684 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b8a422e-b84a-431f-8634-ca1a64aec04e-console-serving-cert\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.537258 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6b8a422e-b84a-431f-8634-ca1a64aec04e-console-oauth-config\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.550962 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq4kw\" (UniqueName: 
\"kubernetes.io/projected/6b8a422e-b84a-431f-8634-ca1a64aec04e-kube-api-access-jq4kw\") pod \"console-6956947bc8-fxxct\" (UID: \"6b8a422e-b84a-431f-8634-ca1a64aec04e\") " pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.605425 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.629821 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-6956947bc8-fxxct_openshift-console_6b8a422e-b84a-431f-8634-ca1a64aec04e_0(99e6b16e01944593ea407852dae4bdf79ab809e5b4a9bed84bdaa5d51c6bbb31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.629935 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-6956947bc8-fxxct_openshift-console_6b8a422e-b84a-431f-8634-ca1a64aec04e_0(99e6b16e01944593ea407852dae4bdf79ab809e5b4a9bed84bdaa5d51c6bbb31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.629956 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-6956947bc8-fxxct_openshift-console_6b8a422e-b84a-431f-8634-ca1a64aec04e_0(99e6b16e01944593ea407852dae4bdf79ab809e5b4a9bed84bdaa5d51c6bbb31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:32 crc kubenswrapper[4787]: E0127 08:03:32.630013 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"console-6956947bc8-fxxct_openshift-console(6b8a422e-b84a-431f-8634-ca1a64aec04e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"console-6956947bc8-fxxct_openshift-console(6b8a422e-b84a-431f-8634-ca1a64aec04e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-6956947bc8-fxxct_openshift-console_6b8a422e-b84a-431f-8634-ca1a64aec04e_0(99e6b16e01944593ea407852dae4bdf79ab809e5b4a9bed84bdaa5d51c6bbb31): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-console/console-6956947bc8-fxxct" podUID="6b8a422e-b84a-431f-8634-ca1a64aec04e" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.835949 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/51c3dc19-29a5-45ab-a384-fb7be3aa3f53-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7qxzb\" (UID: \"51c3dc19-29a5-45ab-a384-fb7be3aa3f53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.840609 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/51c3dc19-29a5-45ab-a384-fb7be3aa3f53-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7qxzb\" (UID: \"51c3dc19-29a5-45ab-a384-fb7be3aa3f53\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.939896 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fjcwr" event={"ID":"0131f9ae-88a6-41c3-8c76-6b2676d5e87b","Type":"ContainerStarted","Data":"75dab7d4d875e1aff23c44820d9f34431de6bd53b4e8cb58fd290f9fc387faf9"} Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.946215 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" event={"ID":"053b65b3-b8e5-490f-81b8-63a5beea5f16","Type":"ContainerStarted","Data":"8c7bde2bb5dbf385ed55f577accc9af0aaf39618dcc77eaed6d37c6574946364"} Jan 27 08:03:32 crc kubenswrapper[4787]: I0127 08:03:32.995190 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:33 crc kubenswrapper[4787]: E0127 08:03:33.038686 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate_51c3dc19-29a5-45ab-a384-fb7be3aa3f53_0(4081e46c154b883502413798d0b5a5b2aabfdfbaa6ec0d2541ef3f74414c93e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:03:33 crc kubenswrapper[4787]: E0127 08:03:33.038772 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate_51c3dc19-29a5-45ab-a384-fb7be3aa3f53_0(4081e46c154b883502413798d0b5a5b2aabfdfbaa6ec0d2541ef3f74414c93e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:33 crc kubenswrapper[4787]: E0127 08:03:33.038807 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate_51c3dc19-29a5-45ab-a384-fb7be3aa3f53_0(4081e46c154b883502413798d0b5a5b2aabfdfbaa6ec0d2541ef3f74414c93e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:33 crc kubenswrapper[4787]: E0127 08:03:33.038864 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate(51c3dc19-29a5-45ab-a384-fb7be3aa3f53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate(51c3dc19-29a5-45ab-a384-fb7be3aa3f53)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate_51c3dc19-29a5-45ab-a384-fb7be3aa3f53_0(4081e46c154b883502413798d0b5a5b2aabfdfbaa6ec0d2541ef3f74414c93e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" podUID="51c3dc19-29a5-45ab-a384-fb7be3aa3f53" Jan 27 08:03:34 crc kubenswrapper[4787]: I0127 08:03:34.964707 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" event={"ID":"053b65b3-b8e5-490f-81b8-63a5beea5f16","Type":"ContainerStarted","Data":"1d3eec5328e0d73621299a6eb958d1f5a82fbeb693a50277378e81014ee3e0f9"} Jan 27 08:03:34 crc kubenswrapper[4787]: I0127 08:03:34.965381 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:34 crc kubenswrapper[4787]: I0127 08:03:34.998617 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk"] Jan 27 08:03:34 crc kubenswrapper[4787]: I0127 08:03:34.998764 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.000458 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.013416 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb"] Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.014981 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.015498 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.021405 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.029890 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6956947bc8-fxxct"] Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.030256 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.030843 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.035846 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" podStartSLOduration=8.035799609 podStartE2EDuration="8.035799609s" podCreationTimestamp="2026-01-27 08:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:03:34.998200589 +0000 UTC m=+720.650556111" watchObservedRunningTime="2026-01-27 08:03:35.035799609 +0000 UTC m=+720.688155101" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.043521 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b8zkq"] Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.043707 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.044301 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.309181 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-6956947bc8-fxxct_openshift-console_6b8a422e-b84a-431f-8634-ca1a64aec04e_0(4e9d42d231cc46ab2e837f99211814c4ddf95054b433ec4b38794a186fe5ac99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.309263 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-6956947bc8-fxxct_openshift-console_6b8a422e-b84a-431f-8634-ca1a64aec04e_0(4e9d42d231cc46ab2e837f99211814c4ddf95054b433ec4b38794a186fe5ac99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.309298 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-6956947bc8-fxxct_openshift-console_6b8a422e-b84a-431f-8634-ca1a64aec04e_0(4e9d42d231cc46ab2e837f99211814c4ddf95054b433ec4b38794a186fe5ac99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.309367 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"console-6956947bc8-fxxct_openshift-console(6b8a422e-b84a-431f-8634-ca1a64aec04e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"console-6956947bc8-fxxct_openshift-console(6b8a422e-b84a-431f-8634-ca1a64aec04e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-6956947bc8-fxxct_openshift-console_6b8a422e-b84a-431f-8634-ca1a64aec04e_0(4e9d42d231cc46ab2e837f99211814c4ddf95054b433ec4b38794a186fe5ac99): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-console/console-6956947bc8-fxxct" podUID="6b8a422e-b84a-431f-8634-ca1a64aec04e" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.336419 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-b8zkq_openshift-nmstate_86b956d1-8553-4624-8324-f0a65d627e10_0(e67cc5005b0e82742465bf968446985c56062bdd46fa09adffcf6fd31db91121): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.336568 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-b8zkq_openshift-nmstate_86b956d1-8553-4624-8324-f0a65d627e10_0(e67cc5005b0e82742465bf968446985c56062bdd46fa09adffcf6fd31db91121): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.336597 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-b8zkq_openshift-nmstate_86b956d1-8553-4624-8324-f0a65d627e10_0(e67cc5005b0e82742465bf968446985c56062bdd46fa09adffcf6fd31db91121): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.336676 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-metrics-54757c584b-b8zkq_openshift-nmstate(86b956d1-8553-4624-8324-f0a65d627e10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-metrics-54757c584b-b8zkq_openshift-nmstate(86b956d1-8553-4624-8324-f0a65d627e10)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-b8zkq_openshift-nmstate_86b956d1-8553-4624-8324-f0a65d627e10_0(e67cc5005b0e82742465bf968446985c56062bdd46fa09adffcf6fd31db91121): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" podUID="86b956d1-8553-4624-8324-f0a65d627e10" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.349303 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate_29bc137f-2121-451f-9684-2b4209b38c32_0(15b2adfc2f8aa26a36c69cb87b40f0e404742ea78d0e57bf01d3c49d4bed4a5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.349430 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate_29bc137f-2121-451f-9684-2b4209b38c32_0(15b2adfc2f8aa26a36c69cb87b40f0e404742ea78d0e57bf01d3c49d4bed4a5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.349481 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate_29bc137f-2121-451f-9684-2b4209b38c32_0(15b2adfc2f8aa26a36c69cb87b40f0e404742ea78d0e57bf01d3c49d4bed4a5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.349576 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate(29bc137f-2121-451f-9684-2b4209b38c32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate(29bc137f-2121-451f-9684-2b4209b38c32)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate_29bc137f-2121-451f-9684-2b4209b38c32_0(15b2adfc2f8aa26a36c69cb87b40f0e404742ea78d0e57bf01d3c49d4bed4a5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" podUID="29bc137f-2121-451f-9684-2b4209b38c32" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.380928 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate_51c3dc19-29a5-45ab-a384-fb7be3aa3f53_0(e5093bba388cc695658dd09347e47b93aecf62edae4e389da0eddd793ffbec51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.381040 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate_51c3dc19-29a5-45ab-a384-fb7be3aa3f53_0(e5093bba388cc695658dd09347e47b93aecf62edae4e389da0eddd793ffbec51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.381076 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate_51c3dc19-29a5-45ab-a384-fb7be3aa3f53_0(e5093bba388cc695658dd09347e47b93aecf62edae4e389da0eddd793ffbec51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:35 crc kubenswrapper[4787]: E0127 08:03:35.381148 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate(51c3dc19-29a5-45ab-a384-fb7be3aa3f53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate(51c3dc19-29a5-45ab-a384-fb7be3aa3f53)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate_51c3dc19-29a5-45ab-a384-fb7be3aa3f53_0(e5093bba388cc695658dd09347e47b93aecf62edae4e389da0eddd793ffbec51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" podUID="51c3dc19-29a5-45ab-a384-fb7be3aa3f53" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.974414 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fjcwr" event={"ID":"0131f9ae-88a6-41c3-8c76-6b2676d5e87b","Type":"ContainerStarted","Data":"d3ce78724e5290d4782214e377ebca4244ecca841925ab607a8ced4ee9f7ebbd"} Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.974907 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.974930 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.974946 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:35 crc kubenswrapper[4787]: I0127 08:03:35.997055 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-fjcwr" podStartSLOduration=2.057954689 podStartE2EDuration="4.997031241s" podCreationTimestamp="2026-01-27 08:03:31 +0000 UTC" firstStartedPulling="2026-01-27 08:03:32.350991603 +0000 UTC m=+718.003347095" lastFinishedPulling="2026-01-27 08:03:35.290068155 +0000 UTC m=+720.942423647" observedRunningTime="2026-01-27 08:03:35.993684726 +0000 UTC m=+721.646040268" watchObservedRunningTime="2026-01-27 08:03:35.997031241 +0000 UTC m=+721.649386733" Jan 27 08:03:36 crc kubenswrapper[4787]: I0127 08:03:36.024107 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:42 crc kubenswrapper[4787]: I0127 08:03:42.077905 4787 scope.go:117] "RemoveContainer" containerID="30eb607bd3c5a74648f4c24cbbf8118159296bb63fb30a76b991d8fdb94cb16a" Jan 27 08:03:42 crc kubenswrapper[4787]: E0127 08:03:42.078684 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rqjpz_openshift-multus(e6f78168-0b0d-464d-b1c7-00bb9a69c0d1)\"" pod="openshift-multus/multus-rqjpz" podUID="e6f78168-0b0d-464d-b1c7-00bb9a69c0d1" Jan 27 08:03:42 crc kubenswrapper[4787]: I0127 08:03:42.338188 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-fjcwr" Jan 27 08:03:46 crc kubenswrapper[4787]: I0127 08:03:46.076117 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:46 crc kubenswrapper[4787]: I0127 08:03:46.077061 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:46 crc kubenswrapper[4787]: E0127 08:03:46.141947 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-6956947bc8-fxxct_openshift-console_6b8a422e-b84a-431f-8634-ca1a64aec04e_0(02d37f1dee94b0bb77886bf62146be76036f653f4e4fc86d46341898423acb52): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:03:46 crc kubenswrapper[4787]: E0127 08:03:46.142076 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-6956947bc8-fxxct_openshift-console_6b8a422e-b84a-431f-8634-ca1a64aec04e_0(02d37f1dee94b0bb77886bf62146be76036f653f4e4fc86d46341898423acb52): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:46 crc kubenswrapper[4787]: E0127 08:03:46.142117 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-6956947bc8-fxxct_openshift-console_6b8a422e-b84a-431f-8634-ca1a64aec04e_0(02d37f1dee94b0bb77886bf62146be76036f653f4e4fc86d46341898423acb52): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:46 crc kubenswrapper[4787]: E0127 08:03:46.142200 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"console-6956947bc8-fxxct_openshift-console(6b8a422e-b84a-431f-8634-ca1a64aec04e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"console-6956947bc8-fxxct_openshift-console(6b8a422e-b84a-431f-8634-ca1a64aec04e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-6956947bc8-fxxct_openshift-console_6b8a422e-b84a-431f-8634-ca1a64aec04e_0(02d37f1dee94b0bb77886bf62146be76036f653f4e4fc86d46341898423acb52): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-console/console-6956947bc8-fxxct" podUID="6b8a422e-b84a-431f-8634-ca1a64aec04e" Jan 27 08:03:49 crc kubenswrapper[4787]: I0127 08:03:49.075954 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:49 crc kubenswrapper[4787]: I0127 08:03:49.076762 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:49 crc kubenswrapper[4787]: E0127 08:03:49.127340 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-b8zkq_openshift-nmstate_86b956d1-8553-4624-8324-f0a65d627e10_0(9ea8beecff1acd76804bc8a934ae567d0534d456e387c14d372f849753bc3f25): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 08:03:49 crc kubenswrapper[4787]: E0127 08:03:49.128024 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-b8zkq_openshift-nmstate_86b956d1-8553-4624-8324-f0a65d627e10_0(9ea8beecff1acd76804bc8a934ae567d0534d456e387c14d372f849753bc3f25): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:49 crc kubenswrapper[4787]: E0127 08:03:49.128079 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-b8zkq_openshift-nmstate_86b956d1-8553-4624-8324-f0a65d627e10_0(9ea8beecff1acd76804bc8a934ae567d0534d456e387c14d372f849753bc3f25): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:03:49 crc kubenswrapper[4787]: E0127 08:03:49.128189 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-metrics-54757c584b-b8zkq_openshift-nmstate(86b956d1-8553-4624-8324-f0a65d627e10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-metrics-54757c584b-b8zkq_openshift-nmstate(86b956d1-8553-4624-8324-f0a65d627e10)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-b8zkq_openshift-nmstate_86b956d1-8553-4624-8324-f0a65d627e10_0(9ea8beecff1acd76804bc8a934ae567d0534d456e387c14d372f849753bc3f25): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" podUID="86b956d1-8553-4624-8324-f0a65d627e10" Jan 27 08:03:50 crc kubenswrapper[4787]: I0127 08:03:50.075705 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:50 crc kubenswrapper[4787]: I0127 08:03:50.075793 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:50 crc kubenswrapper[4787]: I0127 08:03:50.076404 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:50 crc kubenswrapper[4787]: I0127 08:03:50.077188 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:50 crc kubenswrapper[4787]: E0127 08:03:50.148533 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate_51c3dc19-29a5-45ab-a384-fb7be3aa3f53_0(819a04c8f2c0cfc956ad67db26726f7c5d8cfb1d427ad1f5d10dee0733a5a37b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 08:03:50 crc kubenswrapper[4787]: E0127 08:03:50.148653 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate_51c3dc19-29a5-45ab-a384-fb7be3aa3f53_0(819a04c8f2c0cfc956ad67db26726f7c5d8cfb1d427ad1f5d10dee0733a5a37b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:50 crc kubenswrapper[4787]: E0127 08:03:50.148687 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate_51c3dc19-29a5-45ab-a384-fb7be3aa3f53_0(819a04c8f2c0cfc956ad67db26726f7c5d8cfb1d427ad1f5d10dee0733a5a37b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:03:50 crc kubenswrapper[4787]: E0127 08:03:50.148896 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate(51c3dc19-29a5-45ab-a384-fb7be3aa3f53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate(51c3dc19-29a5-45ab-a384-fb7be3aa3f53)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-7qxzb_openshift-nmstate_51c3dc19-29a5-45ab-a384-fb7be3aa3f53_0(819a04c8f2c0cfc956ad67db26726f7c5d8cfb1d427ad1f5d10dee0733a5a37b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" podUID="51c3dc19-29a5-45ab-a384-fb7be3aa3f53" Jan 27 08:03:50 crc kubenswrapper[4787]: E0127 08:03:50.160364 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate_29bc137f-2121-451f-9684-2b4209b38c32_0(484ac11b2a4c44529a76b4c5b543561ce33ddeb29eff28cb730166903109d282): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:03:50 crc kubenswrapper[4787]: E0127 08:03:50.160421 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate_29bc137f-2121-451f-9684-2b4209b38c32_0(484ac11b2a4c44529a76b4c5b543561ce33ddeb29eff28cb730166903109d282): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:50 crc kubenswrapper[4787]: E0127 08:03:50.160441 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate_29bc137f-2121-451f-9684-2b4209b38c32_0(484ac11b2a4c44529a76b4c5b543561ce33ddeb29eff28cb730166903109d282): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:03:50 crc kubenswrapper[4787]: E0127 08:03:50.160492 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate(29bc137f-2121-451f-9684-2b4209b38c32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate(29bc137f-2121-451f-9684-2b4209b38c32)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-n6mzk_openshift-nmstate_29bc137f-2121-451f-9684-2b4209b38c32_0(484ac11b2a4c44529a76b4c5b543561ce33ddeb29eff28cb730166903109d282): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" podUID="29bc137f-2121-451f-9684-2b4209b38c32" Jan 27 08:03:53 crc kubenswrapper[4787]: I0127 08:03:53.077512 4787 scope.go:117] "RemoveContainer" containerID="30eb607bd3c5a74648f4c24cbbf8118159296bb63fb30a76b991d8fdb94cb16a" Jan 27 08:03:54 crc kubenswrapper[4787]: I0127 08:03:54.128614 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rqjpz_e6f78168-0b0d-464d-b1c7-00bb9a69c0d1/kube-multus/2.log" Jan 27 08:03:54 crc kubenswrapper[4787]: I0127 08:03:54.129041 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rqjpz" event={"ID":"e6f78168-0b0d-464d-b1c7-00bb9a69c0d1","Type":"ContainerStarted","Data":"c8fbdabbb3a550f439f21f48b97e5b5a91b35ed06ed9be0e3cf6a154da8b69ba"} Jan 27 08:03:58 crc kubenswrapper[4787]: I0127 08:03:58.137472 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x2w6d" Jan 27 08:03:59 crc kubenswrapper[4787]: I0127 08:03:59.076166 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:03:59 crc kubenswrapper[4787]: I0127 08:03:59.077103 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:04:00 crc kubenswrapper[4787]: I0127 08:04:00.321844 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6956947bc8-fxxct"] Jan 27 08:04:01 crc kubenswrapper[4787]: I0127 08:04:01.303113 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6956947bc8-fxxct" event={"ID":"6b8a422e-b84a-431f-8634-ca1a64aec04e","Type":"ContainerStarted","Data":"6cdefda9f2996e1bf97bae5d69fbcc9b0b7d4b35443e21e38d5ed3d70f2cac69"} Jan 27 08:04:01 crc kubenswrapper[4787]: I0127 08:04:01.304091 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6956947bc8-fxxct" event={"ID":"6b8a422e-b84a-431f-8634-ca1a64aec04e","Type":"ContainerStarted","Data":"6a111a1aa74110e18f9e81fb8a5f192f42ffe5add94a1e5903bf971a7edc2435"} Jan 27 08:04:02 crc kubenswrapper[4787]: I0127 08:04:02.075895 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:04:02 crc kubenswrapper[4787]: I0127 08:04:02.076767 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" Jan 27 08:04:02 crc kubenswrapper[4787]: I0127 08:04:02.327573 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6956947bc8-fxxct" podStartSLOduration=30.327529021 podStartE2EDuration="30.327529021s" podCreationTimestamp="2026-01-27 08:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:04:01.327002525 +0000 UTC m=+746.979358027" watchObservedRunningTime="2026-01-27 08:04:02.327529021 +0000 UTC m=+747.979884513" Jan 27 08:04:02 crc kubenswrapper[4787]: I0127 08:04:02.331334 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b8zkq"] Jan 27 08:04:02 crc kubenswrapper[4787]: I0127 08:04:02.606187 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:04:02 crc kubenswrapper[4787]: I0127 08:04:02.606259 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:04:02 crc kubenswrapper[4787]: I0127 08:04:02.611947 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:04:03 crc kubenswrapper[4787]: I0127 08:04:03.077489 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:04:03 crc kubenswrapper[4787]: I0127 08:04:03.077945 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:04:03 crc kubenswrapper[4787]: I0127 08:04:03.322975 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" event={"ID":"86b956d1-8553-4624-8324-f0a65d627e10","Type":"ContainerStarted","Data":"ac5bbf0cf925cf4225cad32343c9aaeab211401a3f619ffec8542d57173393f2"} Jan 27 08:04:03 crc kubenswrapper[4787]: I0127 08:04:03.324009 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk"] Jan 27 08:04:03 crc kubenswrapper[4787]: I0127 08:04:03.327806 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6956947bc8-fxxct" Jan 27 08:04:03 crc kubenswrapper[4787]: W0127 08:04:03.332882 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29bc137f_2121_451f_9684_2b4209b38c32.slice/crio-af4ccb8cff216e28c419c10ba7aac19872eb26750ecec0d89fdf7281b8269fc8 WatchSource:0}: Error finding container af4ccb8cff216e28c419c10ba7aac19872eb26750ecec0d89fdf7281b8269fc8: Status 404 returned error can't find the container with id af4ccb8cff216e28c419c10ba7aac19872eb26750ecec0d89fdf7281b8269fc8 Jan 27 08:04:03 crc kubenswrapper[4787]: I0127 08:04:03.378607 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qptnb"] Jan 27 08:04:04 crc kubenswrapper[4787]: I0127 08:04:04.075637 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:04:04 crc kubenswrapper[4787]: I0127 08:04:04.077520 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" Jan 27 08:04:04 crc kubenswrapper[4787]: I0127 08:04:04.293977 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb"] Jan 27 08:04:04 crc kubenswrapper[4787]: W0127 08:04:04.304913 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c3dc19_29a5_45ab_a384_fb7be3aa3f53.slice/crio-1d40f2760ae6c959164d8ab633b3df8ae28ef9fefe607d011d02802a8cb5efea WatchSource:0}: Error finding container 1d40f2760ae6c959164d8ab633b3df8ae28ef9fefe607d011d02802a8cb5efea: Status 404 returned error can't find the container with id 1d40f2760ae6c959164d8ab633b3df8ae28ef9fefe607d011d02802a8cb5efea Jan 27 08:04:04 crc kubenswrapper[4787]: I0127 08:04:04.331499 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" event={"ID":"51c3dc19-29a5-45ab-a384-fb7be3aa3f53","Type":"ContainerStarted","Data":"1d40f2760ae6c959164d8ab633b3df8ae28ef9fefe607d011d02802a8cb5efea"} Jan 27 08:04:04 crc kubenswrapper[4787]: I0127 08:04:04.332885 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" event={"ID":"29bc137f-2121-451f-9684-2b4209b38c32","Type":"ContainerStarted","Data":"af4ccb8cff216e28c419c10ba7aac19872eb26750ecec0d89fdf7281b8269fc8"} Jan 27 08:04:04 crc kubenswrapper[4787]: I0127 08:04:04.334922 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" event={"ID":"86b956d1-8553-4624-8324-f0a65d627e10","Type":"ContainerStarted","Data":"dbb9464c9640fd090519fda3a92c9a6f10a5d43f0e361834dd925bccc4f33927"} Jan 27 08:04:05 crc kubenswrapper[4787]: I0127 08:04:05.342908 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" event={"ID":"29bc137f-2121-451f-9684-2b4209b38c32","Type":"ContainerStarted","Data":"164c7cd54f21400960f6a02044f0385e9d033764f215d8f184c4692c13f29da2"} Jan 27 08:04:05 crc kubenswrapper[4787]: I0127 08:04:05.371117 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" podStartSLOduration=33.170817778 podStartE2EDuration="34.371076799s" podCreationTimestamp="2026-01-27 08:03:31 +0000 UTC" firstStartedPulling="2026-01-27 08:04:03.338586425 +0000 UTC m=+748.990941917" lastFinishedPulling="2026-01-27 08:04:04.538845446 +0000 UTC m=+750.191200938" observedRunningTime="2026-01-27 08:04:05.358134288 +0000 UTC m=+751.010489780" watchObservedRunningTime="2026-01-27 08:04:05.371076799 +0000 UTC m=+751.023432301" Jan 27 08:04:06 crc kubenswrapper[4787]: I0127 08:04:06.354475 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" event={"ID":"86b956d1-8553-4624-8324-f0a65d627e10","Type":"ContainerStarted","Data":"c46607a7d58e3bb0d3af713979259ea1e17fb369bdb0f75fdec56f5a3ff60ba7"} Jan 27 08:04:06 crc kubenswrapper[4787]: I0127 08:04:06.354958 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:04:06 crc kubenswrapper[4787]: I0127 08:04:06.382340 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-b8zkq" podStartSLOduration=32.155857214 podStartE2EDuration="35.382314637s" 
podCreationTimestamp="2026-01-27 08:03:31 +0000 UTC" firstStartedPulling="2026-01-27 08:04:02.339542518 +0000 UTC m=+747.991898010" lastFinishedPulling="2026-01-27 08:04:05.565999921 +0000 UTC m=+751.218355433" observedRunningTime="2026-01-27 08:04:06.380870761 +0000 UTC m=+752.033226263" watchObservedRunningTime="2026-01-27 08:04:06.382314637 +0000 UTC m=+752.034670129" Jan 27 08:04:07 crc kubenswrapper[4787]: I0127 08:04:07.370804 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" event={"ID":"51c3dc19-29a5-45ab-a384-fb7be3aa3f53","Type":"ContainerStarted","Data":"eda7cfc2e8ce29978f4ac8c6407bb454ba0d0d9022c89054043141ec4a20fe19"} Jan 27 08:04:07 crc kubenswrapper[4787]: I0127 08:04:07.395791 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qxzb" podStartSLOduration=33.170415409 podStartE2EDuration="35.395767203s" podCreationTimestamp="2026-01-27 08:03:32 +0000 UTC" firstStartedPulling="2026-01-27 08:04:04.307984956 +0000 UTC m=+749.960340448" lastFinishedPulling="2026-01-27 08:04:06.53333676 +0000 UTC m=+752.185692242" observedRunningTime="2026-01-27 08:04:07.391884044 +0000 UTC m=+753.044239556" watchObservedRunningTime="2026-01-27 08:04:07.395767203 +0000 UTC m=+753.048122695" Jan 27 08:04:10 crc kubenswrapper[4787]: I0127 08:04:10.822479 4787 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.460636 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5pnjd"] Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.462073 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.476943 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pnjd"] Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.560865 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2969bccf-3d32-4a37-9291-988f4b2b74cb-utilities\") pod \"redhat-operators-5pnjd\" (UID: \"2969bccf-3d32-4a37-9291-988f4b2b74cb\") " pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.560915 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mv8p\" (UniqueName: \"kubernetes.io/projected/2969bccf-3d32-4a37-9291-988f4b2b74cb-kube-api-access-4mv8p\") pod \"redhat-operators-5pnjd\" (UID: \"2969bccf-3d32-4a37-9291-988f4b2b74cb\") " pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.560974 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2969bccf-3d32-4a37-9291-988f4b2b74cb-catalog-content\") pod \"redhat-operators-5pnjd\" (UID: \"2969bccf-3d32-4a37-9291-988f4b2b74cb\") " pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.662070 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2969bccf-3d32-4a37-9291-988f4b2b74cb-utilities\") pod \"redhat-operators-5pnjd\" (UID: \"2969bccf-3d32-4a37-9291-988f4b2b74cb\") " pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.662151 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mv8p\" (UniqueName: \"kubernetes.io/projected/2969bccf-3d32-4a37-9291-988f4b2b74cb-kube-api-access-4mv8p\") pod \"redhat-operators-5pnjd\" (UID: \"2969bccf-3d32-4a37-9291-988f4b2b74cb\") " pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.662173 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2969bccf-3d32-4a37-9291-988f4b2b74cb-catalog-content\") pod \"redhat-operators-5pnjd\" (UID: \"2969bccf-3d32-4a37-9291-988f4b2b74cb\") " pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.662622 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2969bccf-3d32-4a37-9291-988f4b2b74cb-utilities\") pod \"redhat-operators-5pnjd\" (UID: \"2969bccf-3d32-4a37-9291-988f4b2b74cb\") " pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.662691 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2969bccf-3d32-4a37-9291-988f4b2b74cb-catalog-content\") pod \"redhat-operators-5pnjd\" (UID: \"2969bccf-3d32-4a37-9291-988f4b2b74cb\") " pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.695991 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4mv8p\" (UniqueName: \"kubernetes.io/projected/2969bccf-3d32-4a37-9291-988f4b2b74cb-kube-api-access-4mv8p\") pod \"redhat-operators-5pnjd\" (UID: \"2969bccf-3d32-4a37-9291-988f4b2b74cb\") " pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:12 crc kubenswrapper[4787]: I0127 08:04:12.794342 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:13 crc kubenswrapper[4787]: I0127 08:04:13.033035 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pnjd"] Jan 27 08:04:13 crc kubenswrapper[4787]: I0127 08:04:13.422837 4787 generic.go:334] "Generic (PLEG): container finished" podID="2969bccf-3d32-4a37-9291-988f4b2b74cb" containerID="4d39e1ede4dbd1fbd2d05e09315c30fc85d98547d0055669a526537497a6dfc6" exitCode=0 Jan 27 08:04:13 crc kubenswrapper[4787]: I0127 08:04:13.422907 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pnjd" event={"ID":"2969bccf-3d32-4a37-9291-988f4b2b74cb","Type":"ContainerDied","Data":"4d39e1ede4dbd1fbd2d05e09315c30fc85d98547d0055669a526537497a6dfc6"} Jan 27 08:04:13 crc kubenswrapper[4787]: I0127 08:04:13.422976 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pnjd" event={"ID":"2969bccf-3d32-4a37-9291-988f4b2b74cb","Type":"ContainerStarted","Data":"c50821b4746329a2ae295587c3d77d0a2c81cc6611e1f0c92d6a210e102ae24a"} Jan 27 08:04:14 crc kubenswrapper[4787]: I0127 08:04:14.456709 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pnjd" event={"ID":"2969bccf-3d32-4a37-9291-988f4b2b74cb","Type":"ContainerStarted","Data":"149f3b4ba2c4008c1dca690720d1a958e7656cb788706e58489a818c5c4e6df1"} Jan 27 08:04:14 crc kubenswrapper[4787]: I0127 08:04:14.990051 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6m9kb"] Jan 27 08:04:14 crc kubenswrapper[4787]: I0127 08:04:14.992056 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.001969 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88af0f5f-fa97-42c1-b7d6-311625465692-utilities\") pod \"redhat-marketplace-6m9kb\" (UID: \"88af0f5f-fa97-42c1-b7d6-311625465692\") " pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.002056 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dbf2\" (UniqueName: \"kubernetes.io/projected/88af0f5f-fa97-42c1-b7d6-311625465692-kube-api-access-4dbf2\") pod \"redhat-marketplace-6m9kb\" (UID: \"88af0f5f-fa97-42c1-b7d6-311625465692\") " pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.002117 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88af0f5f-fa97-42c1-b7d6-311625465692-catalog-content\") pod \"redhat-marketplace-6m9kb\" (UID: \"88af0f5f-fa97-42c1-b7d6-311625465692\") " pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.008761 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m9kb"] Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.102900 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88af0f5f-fa97-42c1-b7d6-311625465692-catalog-content\") pod \"redhat-marketplace-6m9kb\" (UID: \"88af0f5f-fa97-42c1-b7d6-311625465692\") " pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.103009 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88af0f5f-fa97-42c1-b7d6-311625465692-utilities\") pod \"redhat-marketplace-6m9kb\" (UID: \"88af0f5f-fa97-42c1-b7d6-311625465692\") " pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.103067 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dbf2\" (UniqueName: \"kubernetes.io/projected/88af0f5f-fa97-42c1-b7d6-311625465692-kube-api-access-4dbf2\") pod \"redhat-marketplace-6m9kb\" (UID: \"88af0f5f-fa97-42c1-b7d6-311625465692\") " pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.103472 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88af0f5f-fa97-42c1-b7d6-311625465692-catalog-content\") pod \"redhat-marketplace-6m9kb\" (UID: \"88af0f5f-fa97-42c1-b7d6-311625465692\") " pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.104574 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88af0f5f-fa97-42c1-b7d6-311625465692-utilities\") pod \"redhat-marketplace-6m9kb\" (UID: \"88af0f5f-fa97-42c1-b7d6-311625465692\") " pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.131851 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4dbf2\" (UniqueName: \"kubernetes.io/projected/88af0f5f-fa97-42c1-b7d6-311625465692-kube-api-access-4dbf2\") pod \"redhat-marketplace-6m9kb\" (UID: \"88af0f5f-fa97-42c1-b7d6-311625465692\") " pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.310064 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.485995 4787 generic.go:334] "Generic (PLEG): container finished" podID="2969bccf-3d32-4a37-9291-988f4b2b74cb" containerID="149f3b4ba2c4008c1dca690720d1a958e7656cb788706e58489a818c5c4e6df1" exitCode=0 Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.486200 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pnjd" event={"ID":"2969bccf-3d32-4a37-9291-988f4b2b74cb","Type":"ContainerDied","Data":"149f3b4ba2c4008c1dca690720d1a958e7656cb788706e58489a818c5c4e6df1"} Jan 27 08:04:15 crc kubenswrapper[4787]: I0127 08:04:15.548296 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m9kb"] Jan 27 08:04:15 crc kubenswrapper[4787]: W0127 08:04:15.554257 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88af0f5f_fa97_42c1_b7d6_311625465692.slice/crio-bf8cbaf9a924d0b96269779abe10b8a7dfa5aa17bb374866581cd219b082df2a WatchSource:0}: Error finding container bf8cbaf9a924d0b96269779abe10b8a7dfa5aa17bb374866581cd219b082df2a: Status 404 returned error can't find the container with id bf8cbaf9a924d0b96269779abe10b8a7dfa5aa17bb374866581cd219b082df2a Jan 27 08:04:16 crc kubenswrapper[4787]: I0127 08:04:16.497846 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pnjd" event={"ID":"2969bccf-3d32-4a37-9291-988f4b2b74cb","Type":"ContainerStarted","Data":"af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010"} Jan 27 08:04:16 crc kubenswrapper[4787]: I0127 08:04:16.499121 4787 generic.go:334] "Generic (PLEG): container finished" podID="88af0f5f-fa97-42c1-b7d6-311625465692" containerID="8a9cab427f031b8a46a4b4b768dd56fa1a47b6601261ca5d3de39774159a4026" exitCode=0 Jan 27 08:04:16 crc kubenswrapper[4787]: I0127 08:04:16.499148 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m9kb" event={"ID":"88af0f5f-fa97-42c1-b7d6-311625465692","Type":"ContainerDied","Data":"8a9cab427f031b8a46a4b4b768dd56fa1a47b6601261ca5d3de39774159a4026"} Jan 27 08:04:16 crc kubenswrapper[4787]: I0127 08:04:16.499165 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m9kb" event={"ID":"88af0f5f-fa97-42c1-b7d6-311625465692","Type":"ContainerStarted","Data":"bf8cbaf9a924d0b96269779abe10b8a7dfa5aa17bb374866581cd219b082df2a"} Jan 27 08:04:16 crc kubenswrapper[4787]: I0127 08:04:16.520780 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5pnjd" podStartSLOduration=1.956624106 podStartE2EDuration="4.520754922s" podCreationTimestamp="2026-01-27 08:04:12 +0000 UTC" firstStartedPulling="2026-01-27 08:04:13.424468489 +0000 UTC m=+759.076823981" lastFinishedPulling="2026-01-27 08:04:15.988599315 +0000 UTC m=+761.640954797" observedRunningTime="2026-01-27 08:04:16.51871691 +0000 UTC m=+762.171072502" 
watchObservedRunningTime="2026-01-27 08:04:16.520754922 +0000 UTC m=+762.173110414" Jan 27 08:04:18 crc kubenswrapper[4787]: I0127 08:04:18.515451 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m9kb" event={"ID":"88af0f5f-fa97-42c1-b7d6-311625465692","Type":"ContainerStarted","Data":"b6084011bc3d9d9cb92c4dfc67e822a60c1ce1099b82905c842288c4c0c33297"} Jan 27 08:04:19 crc kubenswrapper[4787]: I0127 08:04:19.529514 4787 generic.go:334] "Generic (PLEG): container finished" podID="88af0f5f-fa97-42c1-b7d6-311625465692" containerID="b6084011bc3d9d9cb92c4dfc67e822a60c1ce1099b82905c842288c4c0c33297" exitCode=0 Jan 27 08:04:19 crc kubenswrapper[4787]: I0127 08:04:19.529614 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m9kb" event={"ID":"88af0f5f-fa97-42c1-b7d6-311625465692","Type":"ContainerDied","Data":"b6084011bc3d9d9cb92c4dfc67e822a60c1ce1099b82905c842288c4c0c33297"} Jan 27 08:04:21 crc kubenswrapper[4787]: I0127 08:04:21.546621 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m9kb" event={"ID":"88af0f5f-fa97-42c1-b7d6-311625465692","Type":"ContainerStarted","Data":"8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915"} Jan 27 08:04:22 crc kubenswrapper[4787]: I0127 08:04:22.306489 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-n6mzk" Jan 27 08:04:22 crc kubenswrapper[4787]: I0127 08:04:22.572518 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6m9kb" podStartSLOduration=4.769634836 podStartE2EDuration="8.572491905s" podCreationTimestamp="2026-01-27 08:04:14 +0000 UTC" firstStartedPulling="2026-01-27 08:04:16.503213544 +0000 UTC m=+762.155569036" lastFinishedPulling="2026-01-27 08:04:20.306070583 +0000 UTC m=+765.958426105" observedRunningTime="2026-01-27 08:04:22.571608483 +0000 UTC m=+768.223963975" watchObservedRunningTime="2026-01-27 08:04:22.572491905 +0000 UTC m=+768.224847397" Jan 27 08:04:22 crc kubenswrapper[4787]: I0127 08:04:22.794711 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:22 crc kubenswrapper[4787]: I0127 08:04:22.795109 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:22 crc kubenswrapper[4787]: I0127 08:04:22.837744 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:23 crc kubenswrapper[4787]: I0127 08:04:23.604073 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:23 crc kubenswrapper[4787]: I0127 08:04:23.979585 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pnjd"] Jan 27 08:04:25 crc kubenswrapper[4787]: I0127 08:04:25.310936 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:25 crc kubenswrapper[4787]: I0127 08:04:25.311013 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:25 crc kubenswrapper[4787]: I0127 08:04:25.361059 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:25 crc kubenswrapper[4787]: I0127 08:04:25.571410 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5pnjd" podUID="2969bccf-3d32-4a37-9291-988f4b2b74cb" containerName="registry-server" containerID="cri-o://af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010" gracePeriod=2 Jan 27 08:04:25 crc kubenswrapper[4787]: I0127 08:04:25.615508 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:25 crc kubenswrapper[4787]: I0127 08:04:25.933470 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.002690 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mv8p\" (UniqueName: \"kubernetes.io/projected/2969bccf-3d32-4a37-9291-988f4b2b74cb-kube-api-access-4mv8p\") pod \"2969bccf-3d32-4a37-9291-988f4b2b74cb\" (UID: \"2969bccf-3d32-4a37-9291-988f4b2b74cb\") " Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.002849 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2969bccf-3d32-4a37-9291-988f4b2b74cb-catalog-content\") pod \"2969bccf-3d32-4a37-9291-988f4b2b74cb\" (UID: \"2969bccf-3d32-4a37-9291-988f4b2b74cb\") " Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.002881 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2969bccf-3d32-4a37-9291-988f4b2b74cb-utilities\") pod \"2969bccf-3d32-4a37-9291-988f4b2b74cb\" (UID: \"2969bccf-3d32-4a37-9291-988f4b2b74cb\") " Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.004572 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2969bccf-3d32-4a37-9291-988f4b2b74cb-utilities" (OuterVolumeSpecName: "utilities") pod "2969bccf-3d32-4a37-9291-988f4b2b74cb" (UID: "2969bccf-3d32-4a37-9291-988f4b2b74cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.010700 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2969bccf-3d32-4a37-9291-988f4b2b74cb-kube-api-access-4mv8p" (OuterVolumeSpecName: "kube-api-access-4mv8p") pod "2969bccf-3d32-4a37-9291-988f4b2b74cb" (UID: "2969bccf-3d32-4a37-9291-988f4b2b74cb"). InnerVolumeSpecName "kube-api-access-4mv8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.105219 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mv8p\" (UniqueName: \"kubernetes.io/projected/2969bccf-3d32-4a37-9291-988f4b2b74cb-kube-api-access-4mv8p\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.105260 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2969bccf-3d32-4a37-9291-988f4b2b74cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.122123 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2969bccf-3d32-4a37-9291-988f4b2b74cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2969bccf-3d32-4a37-9291-988f4b2b74cb" (UID: "2969bccf-3d32-4a37-9291-988f4b2b74cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.206723 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2969bccf-3d32-4a37-9291-988f4b2b74cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.580938 4787 generic.go:334] "Generic (PLEG): container finished" podID="2969bccf-3d32-4a37-9291-988f4b2b74cb" containerID="af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010" exitCode=0 Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.581003 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pnjd" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.580996 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pnjd" event={"ID":"2969bccf-3d32-4a37-9291-988f4b2b74cb","Type":"ContainerDied","Data":"af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010"} Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.581429 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pnjd" event={"ID":"2969bccf-3d32-4a37-9291-988f4b2b74cb","Type":"ContainerDied","Data":"c50821b4746329a2ae295587c3d77d0a2c81cc6611e1f0c92d6a210e102ae24a"} Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.581456 4787 scope.go:117] "RemoveContainer" containerID="af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.599771 4787 scope.go:117] "RemoveContainer" containerID="149f3b4ba2c4008c1dca690720d1a958e7656cb788706e58489a818c5c4e6df1" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.625037 4787 scope.go:117] "RemoveContainer" containerID="4d39e1ede4dbd1fbd2d05e09315c30fc85d98547d0055669a526537497a6dfc6" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.626331 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pnjd"] Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.630726 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5pnjd"] Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.658022 4787 scope.go:117] "RemoveContainer" containerID="af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010" Jan 27 08:04:26 crc kubenswrapper[4787]: E0127 08:04:26.658795 4787 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010\": container with ID starting with af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010 not found: ID does not exist" containerID="af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.658863 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010"} err="failed to get container status \"af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010\": rpc error: code = NotFound desc = could not find container \"af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010\": container with ID starting with af8e7f65b7eb782a5886e82e9e39b8338e065742d4623e1b46bc0facd957f010 not found: ID does not exist" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.658901 4787 scope.go:117] "RemoveContainer" containerID="149f3b4ba2c4008c1dca690720d1a958e7656cb788706e58489a818c5c4e6df1" Jan 27 08:04:26 crc kubenswrapper[4787]: E0127 08:04:26.659621 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"149f3b4ba2c4008c1dca690720d1a958e7656cb788706e58489a818c5c4e6df1\": container with ID starting with 149f3b4ba2c4008c1dca690720d1a958e7656cb788706e58489a818c5c4e6df1 not found: ID does not exist" containerID="149f3b4ba2c4008c1dca690720d1a958e7656cb788706e58489a818c5c4e6df1" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.659690 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"149f3b4ba2c4008c1dca690720d1a958e7656cb788706e58489a818c5c4e6df1"} err="failed to get container status \"149f3b4ba2c4008c1dca690720d1a958e7656cb788706e58489a818c5c4e6df1\": rpc error: code = NotFound desc = could not find container \"149f3b4ba2c4008c1dca690720d1a958e7656cb788706e58489a818c5c4e6df1\": container with ID starting with 149f3b4ba2c4008c1dca690720d1a958e7656cb788706e58489a818c5c4e6df1 not found: ID does not exist" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.659742 4787 scope.go:117] "RemoveContainer" containerID="4d39e1ede4dbd1fbd2d05e09315c30fc85d98547d0055669a526537497a6dfc6" Jan 27 08:04:26 crc kubenswrapper[4787]: E0127 08:04:26.660173 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d39e1ede4dbd1fbd2d05e09315c30fc85d98547d0055669a526537497a6dfc6\": container with ID starting with 4d39e1ede4dbd1fbd2d05e09315c30fc85d98547d0055669a526537497a6dfc6 not found: ID does not exist" containerID="4d39e1ede4dbd1fbd2d05e09315c30fc85d98547d0055669a526537497a6dfc6" Jan 27 08:04:26 crc kubenswrapper[4787]: I0127 08:04:26.660211 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d39e1ede4dbd1fbd2d05e09315c30fc85d98547d0055669a526537497a6dfc6"} err="failed to get container status \"4d39e1ede4dbd1fbd2d05e09315c30fc85d98547d0055669a526537497a6dfc6\": rpc error: code = NotFound desc = could not find container \"4d39e1ede4dbd1fbd2d05e09315c30fc85d98547d0055669a526537497a6dfc6\": container with ID starting with 4d39e1ede4dbd1fbd2d05e09315c30fc85d98547d0055669a526537497a6dfc6 not found: ID does not exist" Jan 27 08:04:27 crc kubenswrapper[4787]: I0127 08:04:27.083507 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="2969bccf-3d32-4a37-9291-988f4b2b74cb" path="/var/lib/kubelet/pods/2969bccf-3d32-4a37-9291-988f4b2b74cb/volumes" Jan 27 08:04:27 crc kubenswrapper[4787]: I0127 08:04:27.777705 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m9kb"] Jan 27 08:04:27 crc kubenswrapper[4787]: I0127 08:04:27.778745 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6m9kb" podUID="88af0f5f-fa97-42c1-b7d6-311625465692" containerName="registry-server" containerID="cri-o://8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915" gracePeriod=2 Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.122348 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.232324 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88af0f5f-fa97-42c1-b7d6-311625465692-utilities\") pod \"88af0f5f-fa97-42c1-b7d6-311625465692\" (UID: \"88af0f5f-fa97-42c1-b7d6-311625465692\") " Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.232393 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88af0f5f-fa97-42c1-b7d6-311625465692-catalog-content\") pod \"88af0f5f-fa97-42c1-b7d6-311625465692\" (UID: \"88af0f5f-fa97-42c1-b7d6-311625465692\") " Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.232498 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dbf2\" (UniqueName: \"kubernetes.io/projected/88af0f5f-fa97-42c1-b7d6-311625465692-kube-api-access-4dbf2\") pod \"88af0f5f-fa97-42c1-b7d6-311625465692\" (UID: \"88af0f5f-fa97-42c1-b7d6-311625465692\") " Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.234082 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88af0f5f-fa97-42c1-b7d6-311625465692-utilities" (OuterVolumeSpecName: "utilities") pod "88af0f5f-fa97-42c1-b7d6-311625465692" (UID: "88af0f5f-fa97-42c1-b7d6-311625465692"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.239747 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88af0f5f-fa97-42c1-b7d6-311625465692-kube-api-access-4dbf2" (OuterVolumeSpecName: "kube-api-access-4dbf2") pod "88af0f5f-fa97-42c1-b7d6-311625465692" (UID: "88af0f5f-fa97-42c1-b7d6-311625465692"). InnerVolumeSpecName "kube-api-access-4dbf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.268992 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88af0f5f-fa97-42c1-b7d6-311625465692-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88af0f5f-fa97-42c1-b7d6-311625465692" (UID: "88af0f5f-fa97-42c1-b7d6-311625465692"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.334471 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dbf2\" (UniqueName: \"kubernetes.io/projected/88af0f5f-fa97-42c1-b7d6-311625465692-kube-api-access-4dbf2\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.334513 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88af0f5f-fa97-42c1-b7d6-311625465692-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.334526 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88af0f5f-fa97-42c1-b7d6-311625465692-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.426210 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qptnb" podUID="b070600e-8a6f-4bb9-a1c2-e763f55d90eb" containerName="console" containerID="cri-o://310ccc96bfc18eaae7b0abfc190b26a49989ec96f4c419c7eab3252a1004a577" gracePeriod=15 Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.606416 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qptnb_b070600e-8a6f-4bb9-a1c2-e763f55d90eb/console/0.log" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.606482 4787 generic.go:334] "Generic (PLEG): container finished" podID="b070600e-8a6f-4bb9-a1c2-e763f55d90eb" containerID="310ccc96bfc18eaae7b0abfc190b26a49989ec96f4c419c7eab3252a1004a577" exitCode=2 Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.606574 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qptnb" event={"ID":"b070600e-8a6f-4bb9-a1c2-e763f55d90eb","Type":"ContainerDied","Data":"310ccc96bfc18eaae7b0abfc190b26a49989ec96f4c419c7eab3252a1004a577"} Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.611044 4787 generic.go:334] "Generic (PLEG): container finished" podID="88af0f5f-fa97-42c1-b7d6-311625465692" containerID="8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915" exitCode=0 Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.611100 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m9kb" event={"ID":"88af0f5f-fa97-42c1-b7d6-311625465692","Type":"ContainerDied","Data":"8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915"} Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.611134 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m9kb" event={"ID":"88af0f5f-fa97-42c1-b7d6-311625465692","Type":"ContainerDied","Data":"bf8cbaf9a924d0b96269779abe10b8a7dfa5aa17bb374866581cd219b082df2a"} Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.611150 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m9kb" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.611156 4787 scope.go:117] "RemoveContainer" containerID="8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.641190 4787 scope.go:117] "RemoveContainer" containerID="b6084011bc3d9d9cb92c4dfc67e822a60c1ce1099b82905c842288c4c0c33297" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.651682 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m9kb"] Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.658204 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m9kb"] Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.672458 4787 scope.go:117] "RemoveContainer" containerID="8a9cab427f031b8a46a4b4b768dd56fa1a47b6601261ca5d3de39774159a4026" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.705967 4787 scope.go:117] "RemoveContainer" containerID="8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915" Jan 27 08:04:28 crc kubenswrapper[4787]: E0127 08:04:28.706813 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915\": container with ID starting with 8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915 not found: ID does not exist" containerID="8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.706846 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915"} err="failed to get container status \"8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915\": rpc error: code = NotFound desc = could not find container \"8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915\": container with ID starting with 8751cd3e0dd17c3ecaafc8de052509e4464539e2c0ebee461e47fa1d86da4915 not found: ID does not exist" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.706891 4787 scope.go:117] "RemoveContainer" containerID="b6084011bc3d9d9cb92c4dfc67e822a60c1ce1099b82905c842288c4c0c33297" Jan 27 08:04:28 crc kubenswrapper[4787]: E0127 08:04:28.707370 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6084011bc3d9d9cb92c4dfc67e822a60c1ce1099b82905c842288c4c0c33297\": container with ID starting with b6084011bc3d9d9cb92c4dfc67e822a60c1ce1099b82905c842288c4c0c33297 not found: ID does not exist" containerID="b6084011bc3d9d9cb92c4dfc67e822a60c1ce1099b82905c842288c4c0c33297" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.707386 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6084011bc3d9d9cb92c4dfc67e822a60c1ce1099b82905c842288c4c0c33297"} err="failed to get container status \"b6084011bc3d9d9cb92c4dfc67e822a60c1ce1099b82905c842288c4c0c33297\": rpc error: code = NotFound desc = could not find container \"b6084011bc3d9d9cb92c4dfc67e822a60c1ce1099b82905c842288c4c0c33297\": container with ID starting with b6084011bc3d9d9cb92c4dfc67e822a60c1ce1099b82905c842288c4c0c33297 not found: ID does not exist" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.707406 4787 scope.go:117] "RemoveContainer" 
containerID="8a9cab427f031b8a46a4b4b768dd56fa1a47b6601261ca5d3de39774159a4026" Jan 27 08:04:28 crc kubenswrapper[4787]: E0127 08:04:28.707889 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a9cab427f031b8a46a4b4b768dd56fa1a47b6601261ca5d3de39774159a4026\": container with ID starting with 8a9cab427f031b8a46a4b4b768dd56fa1a47b6601261ca5d3de39774159a4026 not found: ID does not exist" containerID="8a9cab427f031b8a46a4b4b768dd56fa1a47b6601261ca5d3de39774159a4026" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.707948 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a9cab427f031b8a46a4b4b768dd56fa1a47b6601261ca5d3de39774159a4026"} err="failed to get container status \"8a9cab427f031b8a46a4b4b768dd56fa1a47b6601261ca5d3de39774159a4026\": rpc error: code = NotFound desc = could not find container \"8a9cab427f031b8a46a4b4b768dd56fa1a47b6601261ca5d3de39774159a4026\": container with ID starting with 8a9cab427f031b8a46a4b4b768dd56fa1a47b6601261ca5d3de39774159a4026 not found: ID does not exist" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.795600 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qptnb_b070600e-8a6f-4bb9-a1c2-e763f55d90eb/console/0.log" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.795703 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qptnb" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.845891 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-oauth-serving-cert\") pod \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.845967 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-oauth-config\") pod \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.846006 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-trusted-ca-bundle\") pod \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.846057 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cbr9\" (UniqueName: \"kubernetes.io/projected/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-kube-api-access-9cbr9\") pod \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.847296 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b070600e-8a6f-4bb9-a1c2-e763f55d90eb" (UID: "b070600e-8a6f-4bb9-a1c2-e763f55d90eb"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.847498 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b070600e-8a6f-4bb9-a1c2-e763f55d90eb" (UID: "b070600e-8a6f-4bb9-a1c2-e763f55d90eb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.847586 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-config\") pod \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.847678 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-serving-cert\") pod \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.848141 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-config" (OuterVolumeSpecName: "console-config") pod "b070600e-8a6f-4bb9-a1c2-e763f55d90eb" (UID: "b070600e-8a6f-4bb9-a1c2-e763f55d90eb"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.848176 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-service-ca\") pod \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\" (UID: \"b070600e-8a6f-4bb9-a1c2-e763f55d90eb\") " Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.848480 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-service-ca" (OuterVolumeSpecName: "service-ca") pod "b070600e-8a6f-4bb9-a1c2-e763f55d90eb" (UID: "b070600e-8a6f-4bb9-a1c2-e763f55d90eb"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.848638 4787 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.848659 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.848669 4787 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.848681 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.853190 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b070600e-8a6f-4bb9-a1c2-e763f55d90eb" (UID: "b070600e-8a6f-4bb9-a1c2-e763f55d90eb"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.853543 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-kube-api-access-9cbr9" (OuterVolumeSpecName: "kube-api-access-9cbr9") pod "b070600e-8a6f-4bb9-a1c2-e763f55d90eb" (UID: "b070600e-8a6f-4bb9-a1c2-e763f55d90eb"). InnerVolumeSpecName "kube-api-access-9cbr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.854051 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b070600e-8a6f-4bb9-a1c2-e763f55d90eb" (UID: "b070600e-8a6f-4bb9-a1c2-e763f55d90eb"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.950074 4787 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.950456 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cbr9\" (UniqueName: \"kubernetes.io/projected/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-kube-api-access-9cbr9\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:28 crc kubenswrapper[4787]: I0127 08:04:28.950472 4787 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b070600e-8a6f-4bb9-a1c2-e763f55d90eb-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:29 crc kubenswrapper[4787]: I0127 08:04:29.089268 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88af0f5f-fa97-42c1-b7d6-311625465692" path="/var/lib/kubelet/pods/88af0f5f-fa97-42c1-b7d6-311625465692/volumes" Jan 27 08:04:29 crc kubenswrapper[4787]: I0127 08:04:29.624428 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qptnb_b070600e-8a6f-4bb9-a1c2-e763f55d90eb/console/0.log" Jan 27 08:04:29 crc kubenswrapper[4787]: I0127 08:04:29.625117 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qptnb" Jan 27 08:04:29 crc kubenswrapper[4787]: I0127 08:04:29.625814 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qptnb" event={"ID":"b070600e-8a6f-4bb9-a1c2-e763f55d90eb","Type":"ContainerDied","Data":"29d34fb56a5ea9d4aa885d1db94f1f656d15f31f013518446e7a295dc5daddbd"} Jan 27 08:04:29 crc kubenswrapper[4787]: I0127 08:04:29.625893 4787 scope.go:117] "RemoveContainer" containerID="310ccc96bfc18eaae7b0abfc190b26a49989ec96f4c419c7eab3252a1004a577" Jan 27 08:04:29 crc kubenswrapper[4787]: I0127 08:04:29.655225 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qptnb"] Jan 27 08:04:29 crc kubenswrapper[4787]: I0127 08:04:29.660467 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qptnb"] Jan 27 08:04:31 crc kubenswrapper[4787]: I0127 08:04:31.085602 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b070600e-8a6f-4bb9-a1c2-e763f55d90eb" path="/var/lib/kubelet/pods/b070600e-8a6f-4bb9-a1c2-e763f55d90eb/volumes" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.232625 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77"] Jan 27 08:04:35 crc kubenswrapper[4787]: E0127 08:04:35.233543 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2969bccf-3d32-4a37-9291-988f4b2b74cb" containerName="extract-content" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.233668 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2969bccf-3d32-4a37-9291-988f4b2b74cb" containerName="extract-content" Jan 27 08:04:35 crc kubenswrapper[4787]: E0127 08:04:35.233692 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88af0f5f-fa97-42c1-b7d6-311625465692" containerName="extract-content" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.233701 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="88af0f5f-fa97-42c1-b7d6-311625465692" containerName="extract-content" Jan 27 08:04:35 crc kubenswrapper[4787]: E0127 08:04:35.233725 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88af0f5f-fa97-42c1-b7d6-311625465692" containerName="registry-server" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.233734 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="88af0f5f-fa97-42c1-b7d6-311625465692" containerName="registry-server" Jan 27 08:04:35 crc kubenswrapper[4787]: E0127 08:04:35.233747 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2969bccf-3d32-4a37-9291-988f4b2b74cb" containerName="extract-utilities" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.233756 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2969bccf-3d32-4a37-9291-988f4b2b74cb" containerName="extract-utilities" Jan 27 08:04:35 crc kubenswrapper[4787]: E0127 08:04:35.233769 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88af0f5f-fa97-42c1-b7d6-311625465692" containerName="extract-utilities" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.233776 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="88af0f5f-fa97-42c1-b7d6-311625465692" containerName="extract-utilities" Jan 27 08:04:35 crc kubenswrapper[4787]: E0127 08:04:35.233792 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b070600e-8a6f-4bb9-a1c2-e763f55d90eb" containerName="console" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.233799 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b070600e-8a6f-4bb9-a1c2-e763f55d90eb" containerName="console" Jan 27 08:04:35 crc kubenswrapper[4787]: E0127 08:04:35.233812 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2969bccf-3d32-4a37-9291-988f4b2b74cb" containerName="registry-server" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.233819 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2969bccf-3d32-4a37-9291-988f4b2b74cb" containerName="registry-server" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.233960 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="88af0f5f-fa97-42c1-b7d6-311625465692" containerName="registry-server" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.233979 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b070600e-8a6f-4bb9-a1c2-e763f55d90eb" containerName="console" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.233989 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2969bccf-3d32-4a37-9291-988f4b2b74cb" containerName="registry-server" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.234988 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.239163 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.243819 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77"] Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.366527 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db32e829-ff6c-4e31-bbc4-5291eb8127d3-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77\" (UID: \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.366614 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db32e829-ff6c-4e31-bbc4-5291eb8127d3-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77\" (UID: \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.366660 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c7j7\" (UniqueName: \"kubernetes.io/projected/db32e829-ff6c-4e31-bbc4-5291eb8127d3-kube-api-access-2c7j7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77\" (UID: \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.467597 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db32e829-ff6c-4e31-bbc4-5291eb8127d3-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77\" (UID: \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.467708 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c7j7\" (UniqueName: \"kubernetes.io/projected/db32e829-ff6c-4e31-bbc4-5291eb8127d3-kube-api-access-2c7j7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77\" (UID: \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.467774 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db32e829-ff6c-4e31-bbc4-5291eb8127d3-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77\" (UID: \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.468697 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/db32e829-ff6c-4e31-bbc4-5291eb8127d3-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77\" (UID: \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.469671 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db32e829-ff6c-4e31-bbc4-5291eb8127d3-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77\" (UID: \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.497437 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c7j7\" (UniqueName: \"kubernetes.io/projected/db32e829-ff6c-4e31-bbc4-5291eb8127d3-kube-api-access-2c7j7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77\" (UID: \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.560131 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:35 crc kubenswrapper[4787]: I0127 08:04:35.846362 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77"] Jan 27 08:04:36 crc kubenswrapper[4787]: I0127 08:04:36.683196 4787 generic.go:334] "Generic (PLEG): container finished" podID="db32e829-ff6c-4e31-bbc4-5291eb8127d3" containerID="d99ae58a358703ed015efa605b660c711aca9e1dfb36edd9c93656d0cc45fe0e" exitCode=0 Jan 27 08:04:36 crc kubenswrapper[4787]: I0127 08:04:36.683261 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" event={"ID":"db32e829-ff6c-4e31-bbc4-5291eb8127d3","Type":"ContainerDied","Data":"d99ae58a358703ed015efa605b660c711aca9e1dfb36edd9c93656d0cc45fe0e"} Jan 27 08:04:36 crc kubenswrapper[4787]: I0127 08:04:36.683307 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" event={"ID":"db32e829-ff6c-4e31-bbc4-5291eb8127d3","Type":"ContainerStarted","Data":"5b8c02e6b42a52df223a377cafaeace92053f80eac27df2ea6cc17d8538627d4"} Jan 27 08:04:38 crc kubenswrapper[4787]: I0127 08:04:38.696651 4787 generic.go:334] "Generic (PLEG): container finished" podID="db32e829-ff6c-4e31-bbc4-5291eb8127d3" containerID="50ef9eb656d7796d5878d5a68c71d99d16c113ebd6e6510949948c6ef6f8ee11" exitCode=0 Jan 27 08:04:38 crc kubenswrapper[4787]: I0127 08:04:38.696756 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" event={"ID":"db32e829-ff6c-4e31-bbc4-5291eb8127d3","Type":"ContainerDied","Data":"50ef9eb656d7796d5878d5a68c71d99d16c113ebd6e6510949948c6ef6f8ee11"} Jan 27 08:04:39 crc kubenswrapper[4787]: I0127 08:04:39.714599 4787 generic.go:334] "Generic (PLEG): container finished" podID="db32e829-ff6c-4e31-bbc4-5291eb8127d3" containerID="cbaf0da5be2dfab10bd6788dadf84fb3502ff97babbcd19d4eb4b76d243cb438" exitCode=0 Jan 27 08:04:39 crc kubenswrapper[4787]: I0127 
08:04:39.714686 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" event={"ID":"db32e829-ff6c-4e31-bbc4-5291eb8127d3","Type":"ContainerDied","Data":"cbaf0da5be2dfab10bd6788dadf84fb3502ff97babbcd19d4eb4b76d243cb438"} Jan 27 08:04:40 crc kubenswrapper[4787]: I0127 08:04:40.974534 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:41 crc kubenswrapper[4787]: I0127 08:04:41.053384 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db32e829-ff6c-4e31-bbc4-5291eb8127d3-util\") pod \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\" (UID: \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\") " Jan 27 08:04:41 crc kubenswrapper[4787]: I0127 08:04:41.053489 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db32e829-ff6c-4e31-bbc4-5291eb8127d3-bundle\") pod \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\" (UID: \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\") " Jan 27 08:04:41 crc kubenswrapper[4787]: I0127 08:04:41.053607 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c7j7\" (UniqueName: \"kubernetes.io/projected/db32e829-ff6c-4e31-bbc4-5291eb8127d3-kube-api-access-2c7j7\") pod \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\" (UID: \"db32e829-ff6c-4e31-bbc4-5291eb8127d3\") " Jan 27 08:04:41 crc kubenswrapper[4787]: I0127 08:04:41.054682 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db32e829-ff6c-4e31-bbc4-5291eb8127d3-bundle" (OuterVolumeSpecName: "bundle") pod "db32e829-ff6c-4e31-bbc4-5291eb8127d3" (UID: "db32e829-ff6c-4e31-bbc4-5291eb8127d3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:04:41 crc kubenswrapper[4787]: I0127 08:04:41.075186 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db32e829-ff6c-4e31-bbc4-5291eb8127d3-util" (OuterVolumeSpecName: "util") pod "db32e829-ff6c-4e31-bbc4-5291eb8127d3" (UID: "db32e829-ff6c-4e31-bbc4-5291eb8127d3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:04:41 crc kubenswrapper[4787]: I0127 08:04:41.088810 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db32e829-ff6c-4e31-bbc4-5291eb8127d3-kube-api-access-2c7j7" (OuterVolumeSpecName: "kube-api-access-2c7j7") pod "db32e829-ff6c-4e31-bbc4-5291eb8127d3" (UID: "db32e829-ff6c-4e31-bbc4-5291eb8127d3"). InnerVolumeSpecName "kube-api-access-2c7j7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:04:41 crc kubenswrapper[4787]: I0127 08:04:41.155378 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db32e829-ff6c-4e31-bbc4-5291eb8127d3-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:41 crc kubenswrapper[4787]: I0127 08:04:41.155420 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c7j7\" (UniqueName: \"kubernetes.io/projected/db32e829-ff6c-4e31-bbc4-5291eb8127d3-kube-api-access-2c7j7\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:41 crc kubenswrapper[4787]: I0127 08:04:41.155431 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db32e829-ff6c-4e31-bbc4-5291eb8127d3-util\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:41 crc kubenswrapper[4787]: I0127 08:04:41.732758 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" event={"ID":"db32e829-ff6c-4e31-bbc4-5291eb8127d3","Type":"ContainerDied","Data":"5b8c02e6b42a52df223a377cafaeace92053f80eac27df2ea6cc17d8538627d4"} Jan 27 08:04:41 crc kubenswrapper[4787]: I0127 08:04:41.732825 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8c02e6b42a52df223a377cafaeace92053f80eac27df2ea6cc17d8538627d4" Jan 27 08:04:41 crc kubenswrapper[4787]: I0127 08:04:41.733545 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.604970 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk"] Jan 27 08:04:50 crc kubenswrapper[4787]: E0127 08:04:50.606320 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db32e829-ff6c-4e31-bbc4-5291eb8127d3" containerName="pull" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.606343 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="db32e829-ff6c-4e31-bbc4-5291eb8127d3" containerName="pull" Jan 27 08:04:50 crc kubenswrapper[4787]: E0127 08:04:50.606363 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db32e829-ff6c-4e31-bbc4-5291eb8127d3" containerName="util" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.606374 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="db32e829-ff6c-4e31-bbc4-5291eb8127d3" containerName="util" Jan 27 08:04:50 crc kubenswrapper[4787]: E0127 08:04:50.606389 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db32e829-ff6c-4e31-bbc4-5291eb8127d3" containerName="extract" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.606403 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="db32e829-ff6c-4e31-bbc4-5291eb8127d3" containerName="extract" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.606587 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="db32e829-ff6c-4e31-bbc4-5291eb8127d3" containerName="extract" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.607311 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.609974 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.611420 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.611928 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.612047 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-f8lbz" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.612525 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.687690 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk"] Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.689291 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d5bf055-d90c-451f-aaf7-19140fcea291-apiservice-cert\") pod \"metallb-operator-controller-manager-555548cdf7-hwkjk\" (UID: \"6d5bf055-d90c-451f-aaf7-19140fcea291\") " pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.689506 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d5bf055-d90c-451f-aaf7-19140fcea291-webhook-cert\") pod \"metallb-operator-controller-manager-555548cdf7-hwkjk\" (UID: \"6d5bf055-d90c-451f-aaf7-19140fcea291\") " pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.689596 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzsrh\" (UniqueName: \"kubernetes.io/projected/6d5bf055-d90c-451f-aaf7-19140fcea291-kube-api-access-jzsrh\") pod \"metallb-operator-controller-manager-555548cdf7-hwkjk\" (UID: \"6d5bf055-d90c-451f-aaf7-19140fcea291\") " pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.791319 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d5bf055-d90c-451f-aaf7-19140fcea291-webhook-cert\") pod \"metallb-operator-controller-manager-555548cdf7-hwkjk\" (UID: \"6d5bf055-d90c-451f-aaf7-19140fcea291\") " pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.791404 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzsrh\" (UniqueName: \"kubernetes.io/projected/6d5bf055-d90c-451f-aaf7-19140fcea291-kube-api-access-jzsrh\") pod \"metallb-operator-controller-manager-555548cdf7-hwkjk\" (UID: \"6d5bf055-d90c-451f-aaf7-19140fcea291\") " pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.791449 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d5bf055-d90c-451f-aaf7-19140fcea291-apiservice-cert\") pod \"metallb-operator-controller-manager-555548cdf7-hwkjk\" (UID: \"6d5bf055-d90c-451f-aaf7-19140fcea291\") " pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.800639 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d5bf055-d90c-451f-aaf7-19140fcea291-apiservice-cert\") pod \"metallb-operator-controller-manager-555548cdf7-hwkjk\" (UID: \"6d5bf055-d90c-451f-aaf7-19140fcea291\") " pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.800730 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d5bf055-d90c-451f-aaf7-19140fcea291-webhook-cert\") pod \"metallb-operator-controller-manager-555548cdf7-hwkjk\" (UID: \"6d5bf055-d90c-451f-aaf7-19140fcea291\") " pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.816826 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzsrh\" (UniqueName: \"kubernetes.io/projected/6d5bf055-d90c-451f-aaf7-19140fcea291-kube-api-access-jzsrh\") pod \"metallb-operator-controller-manager-555548cdf7-hwkjk\" (UID: \"6d5bf055-d90c-451f-aaf7-19140fcea291\") " pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.842442 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f"] Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.843206 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.845538 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.845636 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.850439 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-szgvv" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.874311 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f"] Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.893124 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bgm4\" (UniqueName: \"kubernetes.io/projected/a5c49848-4aac-4efa-8acf-7edcee5c2093-kube-api-access-5bgm4\") pod \"metallb-operator-webhook-server-78544bb5fb-sxw5f\" (UID: \"a5c49848-4aac-4efa-8acf-7edcee5c2093\") " pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.893196 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5c49848-4aac-4efa-8acf-7edcee5c2093-apiservice-cert\") pod \"metallb-operator-webhook-server-78544bb5fb-sxw5f\" (UID: \"a5c49848-4aac-4efa-8acf-7edcee5c2093\") " pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.893274 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5c49848-4aac-4efa-8acf-7edcee5c2093-webhook-cert\") pod \"metallb-operator-webhook-server-78544bb5fb-sxw5f\" (UID: \"a5c49848-4aac-4efa-8acf-7edcee5c2093\") " pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.927863 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.994610 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bgm4\" (UniqueName: \"kubernetes.io/projected/a5c49848-4aac-4efa-8acf-7edcee5c2093-kube-api-access-5bgm4\") pod \"metallb-operator-webhook-server-78544bb5fb-sxw5f\" (UID: \"a5c49848-4aac-4efa-8acf-7edcee5c2093\") " pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.994673 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5c49848-4aac-4efa-8acf-7edcee5c2093-apiservice-cert\") pod \"metallb-operator-webhook-server-78544bb5fb-sxw5f\" (UID: \"a5c49848-4aac-4efa-8acf-7edcee5c2093\") " pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:04:50 crc kubenswrapper[4787]: I0127 08:04:50.994746 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5c49848-4aac-4efa-8acf-7edcee5c2093-webhook-cert\") pod \"metallb-operator-webhook-server-78544bb5fb-sxw5f\" (UID: \"a5c49848-4aac-4efa-8acf-7edcee5c2093\") " pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:04:51 crc kubenswrapper[4787]: I0127 08:04:51.001224 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5c49848-4aac-4efa-8acf-7edcee5c2093-webhook-cert\") pod \"metallb-operator-webhook-server-78544bb5fb-sxw5f\" (UID: \"a5c49848-4aac-4efa-8acf-7edcee5c2093\") " pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:04:51 crc kubenswrapper[4787]: I0127 08:04:51.002749 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5c49848-4aac-4efa-8acf-7edcee5c2093-apiservice-cert\") pod \"metallb-operator-webhook-server-78544bb5fb-sxw5f\" (UID: \"a5c49848-4aac-4efa-8acf-7edcee5c2093\") " pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:04:51 crc kubenswrapper[4787]: I0127 08:04:51.014541 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bgm4\" (UniqueName: \"kubernetes.io/projected/a5c49848-4aac-4efa-8acf-7edcee5c2093-kube-api-access-5bgm4\") pod \"metallb-operator-webhook-server-78544bb5fb-sxw5f\" (UID: \"a5c49848-4aac-4efa-8acf-7edcee5c2093\") " pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:04:51 crc kubenswrapper[4787]: I0127 08:04:51.165262 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:04:51 crc kubenswrapper[4787]: I0127 08:04:51.223289 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk"] Jan 27 08:04:51 crc kubenswrapper[4787]: I0127 08:04:51.432931 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f"] Jan 27 08:04:51 crc kubenswrapper[4787]: W0127 08:04:51.447541 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5c49848_4aac_4efa_8acf_7edcee5c2093.slice/crio-b827e913033f8581661468b8e2587c39c3961653cb81c09184b39cc7b6bfb075 WatchSource:0}: Error finding container b827e913033f8581661468b8e2587c39c3961653cb81c09184b39cc7b6bfb075: Status 404 returned error can't find the container with id b827e913033f8581661468b8e2587c39c3961653cb81c09184b39cc7b6bfb075 Jan 27 08:04:51 crc kubenswrapper[4787]: I0127 08:04:51.796792 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" event={"ID":"a5c49848-4aac-4efa-8acf-7edcee5c2093","Type":"ContainerStarted","Data":"b827e913033f8581661468b8e2587c39c3961653cb81c09184b39cc7b6bfb075"} Jan 27 08:04:51 crc kubenswrapper[4787]: I0127 08:04:51.797985 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" event={"ID":"6d5bf055-d90c-451f-aaf7-19140fcea291","Type":"ContainerStarted","Data":"4d9450fdd357ae7b95b84cb88f28347313189725148bd5ef1fd73de1e380aa98"} Jan 27 08:04:52 crc kubenswrapper[4787]: I0127 08:04:52.823421 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:04:52 crc kubenswrapper[4787]: I0127 08:04:52.823519 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:04:56 crc kubenswrapper[4787]: I0127 08:04:56.848491 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" event={"ID":"a5c49848-4aac-4efa-8acf-7edcee5c2093","Type":"ContainerStarted","Data":"926f125b1130617957d8c188fc86ae5816d96f3b1b4a25546fba0c7933b77bc1"} Jan 27 08:04:56 crc kubenswrapper[4787]: I0127 08:04:56.849115 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:04:56 crc kubenswrapper[4787]: I0127 08:04:56.850007 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" event={"ID":"6d5bf055-d90c-451f-aaf7-19140fcea291","Type":"ContainerStarted","Data":"1811eeec1b01394bf7d9c8c567014e98d1bea80df94e8621f7e3690c74693473"} Jan 27 08:04:56 crc kubenswrapper[4787]: I0127 08:04:56.850271 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 
08:04:56 crc kubenswrapper[4787]: I0127 08:04:56.874802 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" podStartSLOduration=1.997817036 podStartE2EDuration="6.874770222s" podCreationTimestamp="2026-01-27 08:04:50 +0000 UTC" firstStartedPulling="2026-01-27 08:04:51.450393707 +0000 UTC m=+797.102749199" lastFinishedPulling="2026-01-27 08:04:56.327346873 +0000 UTC m=+801.979702385" observedRunningTime="2026-01-27 08:04:56.872088053 +0000 UTC m=+802.524443555" watchObservedRunningTime="2026-01-27 08:04:56.874770222 +0000 UTC m=+802.527125714" Jan 27 08:04:56 crc kubenswrapper[4787]: I0127 08:04:56.899104 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" podStartSLOduration=1.878779843 podStartE2EDuration="6.899078348s" podCreationTimestamp="2026-01-27 08:04:50 +0000 UTC" firstStartedPulling="2026-01-27 08:04:51.272444507 +0000 UTC m=+796.924799999" lastFinishedPulling="2026-01-27 08:04:56.292743012 +0000 UTC m=+801.945098504" observedRunningTime="2026-01-27 08:04:56.895330525 +0000 UTC m=+802.547686047" watchObservedRunningTime="2026-01-27 08:04:56.899078348 +0000 UTC m=+802.551433880" Jan 27 08:05:11 crc kubenswrapper[4787]: I0127 08:05:11.170617 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-78544bb5fb-sxw5f" Jan 27 08:05:22 crc kubenswrapper[4787]: I0127 08:05:22.823270 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:05:22 crc kubenswrapper[4787]: I0127 08:05:22.825630 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:05:30 crc kubenswrapper[4787]: I0127 08:05:30.931956 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-555548cdf7-hwkjk" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.747777 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pp6fl"] Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.750881 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.753442 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5tpq8" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.757916 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.759215 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q"] Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.760492 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.762531 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.766086 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.774339 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q"] Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.847007 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-t2fkx"] Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.847950 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-t2fkx" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.850128 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.850700 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.851002 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-w76xj" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.851281 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.868649 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-k24s7"] Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.870007 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.872864 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.888214 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-k24s7"] Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.934526 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbl5b\" (UniqueName: \"kubernetes.io/projected/49a597ca-3bfc-4377-a49b-19337d609659-kube-api-access-kbl5b\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.935318 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/49a597ca-3bfc-4377-a49b-19337d609659-reloader\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.935394 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/49a597ca-3bfc-4377-a49b-19337d609659-metrics\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.935439 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/49a597ca-3bfc-4377-a49b-19337d609659-frr-conf\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.935482 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/49a597ca-3bfc-4377-a49b-19337d609659-frr-startup\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.935505 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8s5m\" (UniqueName: \"kubernetes.io/projected/360229c9-082c-4e85-8800-c5f8717fd8c4-kube-api-access-s8s5m\") pod \"frr-k8s-webhook-server-7df86c4f6c-vfl8q\" (UID: \"360229c9-082c-4e85-8800-c5f8717fd8c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.935564 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/360229c9-082c-4e85-8800-c5f8717fd8c4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vfl8q\" (UID: \"360229c9-082c-4e85-8800-c5f8717fd8c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" Jan 27 08:05:31 crc kubenswrapper[4787]: I0127 08:05:31.935591 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/49a597ca-3bfc-4377-a49b-19337d609659-frr-sockets\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:31 
crc kubenswrapper[4787]: I0127 08:05:31.935605 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49a597ca-3bfc-4377-a49b-19337d609659-metrics-certs\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.037693 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/49a597ca-3bfc-4377-a49b-19337d609659-frr-conf\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.037764 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f5e61cdf-e660-42e7-b43c-42afb781223b-metallb-excludel2\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.037796 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2glj\" (UniqueName: \"kubernetes.io/projected/f9b9a161-da5c-416b-9713-5fb85ee005fb-kube-api-access-t2glj\") pod \"controller-6968d8fdc4-k24s7\" (UID: \"f9b9a161-da5c-416b-9713-5fb85ee005fb\") " pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.037822 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/49a597ca-3bfc-4377-a49b-19337d609659-frr-startup\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.037844 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8s5m\" (UniqueName: \"kubernetes.io/projected/360229c9-082c-4e85-8800-c5f8717fd8c4-kube-api-access-s8s5m\") pod \"frr-k8s-webhook-server-7df86c4f6c-vfl8q\" (UID: \"360229c9-082c-4e85-8800-c5f8717fd8c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.037868 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9b9a161-da5c-416b-9713-5fb85ee005fb-metrics-certs\") pod \"controller-6968d8fdc4-k24s7\" (UID: \"f9b9a161-da5c-416b-9713-5fb85ee005fb\") " pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038195 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/49a597ca-3bfc-4377-a49b-19337d609659-frr-conf\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038192 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/360229c9-082c-4e85-8800-c5f8717fd8c4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vfl8q\" (UID: \"360229c9-082c-4e85-8800-c5f8717fd8c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038308 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/49a597ca-3bfc-4377-a49b-19337d609659-frr-sockets\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038329 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49a597ca-3bfc-4377-a49b-19337d609659-metrics-certs\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038369 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbl5b\" (UniqueName: \"kubernetes.io/projected/49a597ca-3bfc-4377-a49b-19337d609659-kube-api-access-kbl5b\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038399 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e61cdf-e660-42e7-b43c-42afb781223b-metrics-certs\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038435 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f9b9a161-da5c-416b-9713-5fb85ee005fb-cert\") pod \"controller-6968d8fdc4-k24s7\" (UID: \"f9b9a161-da5c-416b-9713-5fb85ee005fb\") " pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038467 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/49a597ca-3bfc-4377-a49b-19337d609659-reloader\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038501 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bkqh\" (UniqueName: \"kubernetes.io/projected/f5e61cdf-e660-42e7-b43c-42afb781223b-kube-api-access-8bkqh\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038524 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/49a597ca-3bfc-4377-a49b-19337d609659-metrics\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038544 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f5e61cdf-e660-42e7-b43c-42afb781223b-memberlist\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038787 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/49a597ca-3bfc-4377-a49b-19337d609659-frr-startup\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " 
pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.038798 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/49a597ca-3bfc-4377-a49b-19337d609659-frr-sockets\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.039036 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/49a597ca-3bfc-4377-a49b-19337d609659-reloader\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.039183 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/49a597ca-3bfc-4377-a49b-19337d609659-metrics\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.044889 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49a597ca-3bfc-4377-a49b-19337d609659-metrics-certs\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.045055 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/360229c9-082c-4e85-8800-c5f8717fd8c4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vfl8q\" (UID: \"360229c9-082c-4e85-8800-c5f8717fd8c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.056571 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbl5b\" (UniqueName: \"kubernetes.io/projected/49a597ca-3bfc-4377-a49b-19337d609659-kube-api-access-kbl5b\") pod \"frr-k8s-pp6fl\" (UID: \"49a597ca-3bfc-4377-a49b-19337d609659\") " pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.071391 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8s5m\" (UniqueName: \"kubernetes.io/projected/360229c9-082c-4e85-8800-c5f8717fd8c4-kube-api-access-s8s5m\") pod \"frr-k8s-webhook-server-7df86c4f6c-vfl8q\" (UID: \"360229c9-082c-4e85-8800-c5f8717fd8c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.075777 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.086136 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.139009 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e61cdf-e660-42e7-b43c-42afb781223b-metrics-certs\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.139073 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f9b9a161-da5c-416b-9713-5fb85ee005fb-cert\") pod \"controller-6968d8fdc4-k24s7\" (UID: \"f9b9a161-da5c-416b-9713-5fb85ee005fb\") " pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.139110 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bkqh\" (UniqueName: \"kubernetes.io/projected/f5e61cdf-e660-42e7-b43c-42afb781223b-kube-api-access-8bkqh\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.139127 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f5e61cdf-e660-42e7-b43c-42afb781223b-memberlist\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.139165 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f5e61cdf-e660-42e7-b43c-42afb781223b-metallb-excludel2\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.139186 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2glj\" (UniqueName: \"kubernetes.io/projected/f9b9a161-da5c-416b-9713-5fb85ee005fb-kube-api-access-t2glj\") pod \"controller-6968d8fdc4-k24s7\" (UID: \"f9b9a161-da5c-416b-9713-5fb85ee005fb\") " pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.139206 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9b9a161-da5c-416b-9713-5fb85ee005fb-metrics-certs\") pod \"controller-6968d8fdc4-k24s7\" (UID: \"f9b9a161-da5c-416b-9713-5fb85ee005fb\") " pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:32 crc kubenswrapper[4787]: E0127 08:05:32.139585 4787 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 08:05:32 crc kubenswrapper[4787]: E0127 08:05:32.139699 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5e61cdf-e660-42e7-b43c-42afb781223b-memberlist podName:f5e61cdf-e660-42e7-b43c-42afb781223b nodeName:}" failed. No retries permitted until 2026-01-27 08:05:32.639666676 +0000 UTC m=+838.292022168 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f5e61cdf-e660-42e7-b43c-42afb781223b-memberlist") pod "speaker-t2fkx" (UID: "f5e61cdf-e660-42e7-b43c-42afb781223b") : secret "metallb-memberlist" not found Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.140936 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f5e61cdf-e660-42e7-b43c-42afb781223b-metallb-excludel2\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.144613 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e61cdf-e660-42e7-b43c-42afb781223b-metrics-certs\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.144719 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f9b9a161-da5c-416b-9713-5fb85ee005fb-cert\") pod \"controller-6968d8fdc4-k24s7\" (UID: \"f9b9a161-da5c-416b-9713-5fb85ee005fb\") " pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.147603 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9b9a161-da5c-416b-9713-5fb85ee005fb-metrics-certs\") pod \"controller-6968d8fdc4-k24s7\" (UID: \"f9b9a161-da5c-416b-9713-5fb85ee005fb\") " pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.162669 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2glj\" (UniqueName: \"kubernetes.io/projected/f9b9a161-da5c-416b-9713-5fb85ee005fb-kube-api-access-t2glj\") pod \"controller-6968d8fdc4-k24s7\" (UID: \"f9b9a161-da5c-416b-9713-5fb85ee005fb\") " pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.165785 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bkqh\" (UniqueName: \"kubernetes.io/projected/f5e61cdf-e660-42e7-b43c-42afb781223b-kube-api-access-8bkqh\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.188905 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.360143 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q"] Jan 27 08:05:32 crc kubenswrapper[4787]: W0127 08:05:32.369111 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod360229c9_082c_4e85_8800_c5f8717fd8c4.slice/crio-2f4a3a28e0cbef9d370f5a2e53f7b82cc2171d4ebff79a166b4e009c2840f49b WatchSource:0}: Error finding container 2f4a3a28e0cbef9d370f5a2e53f7b82cc2171d4ebff79a166b4e009c2840f49b: Status 404 returned error can't find the container with id 2f4a3a28e0cbef9d370f5a2e53f7b82cc2171d4ebff79a166b4e009c2840f49b Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.643731 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-k24s7"] Jan 27 08:05:32 crc kubenswrapper[4787]: I0127 08:05:32.646663 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f5e61cdf-e660-42e7-b43c-42afb781223b-memberlist\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:32 crc kubenswrapper[4787]: E0127 08:05:32.646879 4787 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 08:05:32 crc kubenswrapper[4787]: E0127 08:05:32.646999 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5e61cdf-e660-42e7-b43c-42afb781223b-memberlist podName:f5e61cdf-e660-42e7-b43c-42afb781223b nodeName:}" failed. No retries permitted until 2026-01-27 08:05:33.64697157 +0000 UTC m=+839.299327062 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f5e61cdf-e660-42e7-b43c-42afb781223b-memberlist") pod "speaker-t2fkx" (UID: "f5e61cdf-e660-42e7-b43c-42afb781223b") : secret "metallb-memberlist" not found Jan 27 08:05:32 crc kubenswrapper[4787]: W0127 08:05:32.649042 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b9a161_da5c_416b_9713_5fb85ee005fb.slice/crio-91c4c8bb93f5387567a51e738a030b0fa6a4c1d18bb5308f214e57fd7c2b7992 WatchSource:0}: Error finding container 91c4c8bb93f5387567a51e738a030b0fa6a4c1d18bb5308f214e57fd7c2b7992: Status 404 returned error can't find the container with id 91c4c8bb93f5387567a51e738a030b0fa6a4c1d18bb5308f214e57fd7c2b7992 Jan 27 08:05:33 crc kubenswrapper[4787]: I0127 08:05:33.106384 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pp6fl" event={"ID":"49a597ca-3bfc-4377-a49b-19337d609659","Type":"ContainerStarted","Data":"bd2db11db06541ae69844001086e24fe848a05faf780baf0f5d0197556aebea1"} Jan 27 08:05:33 crc kubenswrapper[4787]: I0127 08:05:33.107920 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" event={"ID":"360229c9-082c-4e85-8800-c5f8717fd8c4","Type":"ContainerStarted","Data":"2f4a3a28e0cbef9d370f5a2e53f7b82cc2171d4ebff79a166b4e009c2840f49b"} Jan 27 08:05:33 crc kubenswrapper[4787]: I0127 08:05:33.109823 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-k24s7" event={"ID":"f9b9a161-da5c-416b-9713-5fb85ee005fb","Type":"ContainerStarted","Data":"4cfde7e67fee9e3da12be81b7c690a087441866556cb0d0fef2dcddb6eb03d09"} Jan 27 08:05:33 crc kubenswrapper[4787]: I0127 08:05:33.109881 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-k24s7" event={"ID":"f9b9a161-da5c-416b-9713-5fb85ee005fb","Type":"ContainerStarted","Data":"6a62a114aff53887f3b5131bff051bafbb25e0eeb7c4055e403d850d42d65bfc"} Jan 27 08:05:33 crc kubenswrapper[4787]: I0127 08:05:33.109894 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-k24s7" event={"ID":"f9b9a161-da5c-416b-9713-5fb85ee005fb","Type":"ContainerStarted","Data":"91c4c8bb93f5387567a51e738a030b0fa6a4c1d18bb5308f214e57fd7c2b7992"} Jan 27 08:05:33 crc kubenswrapper[4787]: I0127 08:05:33.109985 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:33 crc kubenswrapper[4787]: I0127 08:05:33.132522 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-k24s7" podStartSLOduration=2.132498605 podStartE2EDuration="2.132498605s" podCreationTimestamp="2026-01-27 08:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:05:33.128951898 +0000 UTC m=+838.781307420" watchObservedRunningTime="2026-01-27 08:05:33.132498605 +0000 UTC m=+838.784854097" Jan 27 08:05:33 crc kubenswrapper[4787]: I0127 08:05:33.660179 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f5e61cdf-e660-42e7-b43c-42afb781223b-memberlist\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:33 crc kubenswrapper[4787]: I0127 08:05:33.669889 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f5e61cdf-e660-42e7-b43c-42afb781223b-memberlist\") pod \"speaker-t2fkx\" (UID: \"f5e61cdf-e660-42e7-b43c-42afb781223b\") " pod="metallb-system/speaker-t2fkx" Jan 27 08:05:33 crc kubenswrapper[4787]: I0127 08:05:33.963307 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-t2fkx" Jan 27 08:05:34 crc kubenswrapper[4787]: I0127 08:05:34.118653 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-t2fkx" event={"ID":"f5e61cdf-e660-42e7-b43c-42afb781223b","Type":"ContainerStarted","Data":"06b472d51d3741a222a2d3df7e9c77bbdf3f613ed002f10c21cef0762fb26979"} Jan 27 08:05:35 crc kubenswrapper[4787]: I0127 08:05:35.147790 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-t2fkx" event={"ID":"f5e61cdf-e660-42e7-b43c-42afb781223b","Type":"ContainerStarted","Data":"9e5c7826cef270c18da0240a2bdd27416720a27547a6aed98ca67b82f0b11db0"} Jan 27 08:05:35 crc kubenswrapper[4787]: I0127 08:05:35.148143 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-t2fkx" event={"ID":"f5e61cdf-e660-42e7-b43c-42afb781223b","Type":"ContainerStarted","Data":"ca4605ed486e1fb45d91ab9669c2f91299eb61571ccdf48f1099ca84ea11cf56"} Jan 27 08:05:35 crc kubenswrapper[4787]: I0127 08:05:35.148173 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-t2fkx" Jan 27 08:05:35 crc kubenswrapper[4787]: I0127 08:05:35.202814 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-t2fkx" podStartSLOduration=4.202787278 podStartE2EDuration="4.202787278s" podCreationTimestamp="2026-01-27 08:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:05:35.202237256 +0000 UTC m=+840.854592758" watchObservedRunningTime="2026-01-27 08:05:35.202787278 +0000 UTC m=+840.855142770" Jan 27 08:05:41 crc kubenswrapper[4787]: I0127 08:05:41.200134 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" event={"ID":"360229c9-082c-4e85-8800-c5f8717fd8c4","Type":"ContainerStarted","Data":"e2a9ed1e09616e004f2f92ef2b88e97eaa992d3d2c9c461ef3e94a64699eda47"} Jan 27 08:05:41 crc kubenswrapper[4787]: I0127 08:05:41.200884 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" Jan 27 08:05:41 crc kubenswrapper[4787]: I0127 08:05:41.202517 4787 generic.go:334] "Generic (PLEG): container finished" podID="49a597ca-3bfc-4377-a49b-19337d609659" containerID="50bcf8228d7ea362a5e757cd344c10dd3d9c7face6a6cc8141f2de6ef3a9e651" exitCode=0 Jan 27 08:05:41 crc kubenswrapper[4787]: I0127 08:05:41.202599 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pp6fl" event={"ID":"49a597ca-3bfc-4377-a49b-19337d609659","Type":"ContainerDied","Data":"50bcf8228d7ea362a5e757cd344c10dd3d9c7face6a6cc8141f2de6ef3a9e651"} Jan 27 08:05:41 crc kubenswrapper[4787]: I0127 08:05:41.218742 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" podStartSLOduration=2.341479229 podStartE2EDuration="10.218718734s" podCreationTimestamp="2026-01-27 08:05:31 +0000 UTC" firstStartedPulling="2026-01-27 08:05:32.373827974 +0000 UTC 
m=+838.026183466" lastFinishedPulling="2026-01-27 08:05:40.251067479 +0000 UTC m=+845.903422971" observedRunningTime="2026-01-27 08:05:41.215512163 +0000 UTC m=+846.867867665" watchObservedRunningTime="2026-01-27 08:05:41.218718734 +0000 UTC m=+846.871074226" Jan 27 08:05:42 crc kubenswrapper[4787]: I0127 08:05:42.195159 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-k24s7" Jan 27 08:05:42 crc kubenswrapper[4787]: I0127 08:05:42.214139 4787 generic.go:334] "Generic (PLEG): container finished" podID="49a597ca-3bfc-4377-a49b-19337d609659" containerID="f13de1b16a2b0ec1f03cb3fe91834ff469a9bc16e3421fb4da9faa6607c536f2" exitCode=0 Jan 27 08:05:42 crc kubenswrapper[4787]: I0127 08:05:42.215516 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pp6fl" event={"ID":"49a597ca-3bfc-4377-a49b-19337d609659","Type":"ContainerDied","Data":"f13de1b16a2b0ec1f03cb3fe91834ff469a9bc16e3421fb4da9faa6607c536f2"} Jan 27 08:05:43 crc kubenswrapper[4787]: I0127 08:05:43.239256 4787 generic.go:334] "Generic (PLEG): container finished" podID="49a597ca-3bfc-4377-a49b-19337d609659" containerID="20d2ac42c475dddad531d4931d49dbc8383f5f9d2b460531af93bb3060b498d9" exitCode=0 Jan 27 08:05:43 crc kubenswrapper[4787]: I0127 08:05:43.239423 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pp6fl" event={"ID":"49a597ca-3bfc-4377-a49b-19337d609659","Type":"ContainerDied","Data":"20d2ac42c475dddad531d4931d49dbc8383f5f9d2b460531af93bb3060b498d9"} Jan 27 08:05:44 crc kubenswrapper[4787]: I0127 08:05:44.263052 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pp6fl" event={"ID":"49a597ca-3bfc-4377-a49b-19337d609659","Type":"ContainerStarted","Data":"194910e1faef0b7d8085d8f3869b5f4854c7ed83a27e8a96a3adc010d1d1ac16"} Jan 27 08:05:44 crc kubenswrapper[4787]: I0127 08:05:44.263624 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pp6fl" event={"ID":"49a597ca-3bfc-4377-a49b-19337d609659","Type":"ContainerStarted","Data":"277912d65d006ba1d755ac834c7ece0e339da4e6e7d41f67f6e0706c4ece1313"} Jan 27 08:05:44 crc kubenswrapper[4787]: I0127 08:05:44.263641 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pp6fl" event={"ID":"49a597ca-3bfc-4377-a49b-19337d609659","Type":"ContainerStarted","Data":"b3fd2aaee7e391d06dfe092d23d13e02cfb77ef9daeb71f37af3ec85b9fbb70e"} Jan 27 08:05:44 crc kubenswrapper[4787]: I0127 08:05:44.263656 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pp6fl" event={"ID":"49a597ca-3bfc-4377-a49b-19337d609659","Type":"ContainerStarted","Data":"ffc45d88f8d956c5501fc0b89149767e48468c9e6b207d8762990fdc1405a9b0"} Jan 27 08:05:44 crc kubenswrapper[4787]: I0127 08:05:44.263670 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pp6fl" event={"ID":"49a597ca-3bfc-4377-a49b-19337d609659","Type":"ContainerStarted","Data":"19da2fd3b4347ba1f6088ec73757bc008c1694a80386070ae63a61fcaeb58790"} Jan 27 08:05:45 crc kubenswrapper[4787]: I0127 08:05:45.278350 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pp6fl" event={"ID":"49a597ca-3bfc-4377-a49b-19337d609659","Type":"ContainerStarted","Data":"7ddbbc9d072cf882e76c4feab1a22d3c2d6092306a47f0a04f3b3a03447b44cc"} Jan 27 08:05:45 crc kubenswrapper[4787]: I0127 08:05:45.278872 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:45 crc kubenswrapper[4787]: I0127 08:05:45.307738 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pp6fl" podStartSLOduration=6.352553844 podStartE2EDuration="14.307710336s" podCreationTimestamp="2026-01-27 08:05:31 +0000 UTC" firstStartedPulling="2026-01-27 08:05:32.276833607 +0000 UTC m=+837.929189129" lastFinishedPulling="2026-01-27 08:05:40.231990119 +0000 UTC m=+845.884345621" observedRunningTime="2026-01-27 08:05:45.301469332 +0000 UTC m=+850.953824834" watchObservedRunningTime="2026-01-27 08:05:45.307710336 +0000 UTC m=+850.960065858" Jan 27 08:05:47 crc kubenswrapper[4787]: I0127 08:05:47.089623 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:47 crc kubenswrapper[4787]: I0127 08:05:47.124706 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:05:48 crc kubenswrapper[4787]: I0127 08:05:48.995265 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x2v88"] Jan 27 08:05:48 crc kubenswrapper[4787]: I0127 08:05:48.997139 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:49 crc kubenswrapper[4787]: I0127 08:05:49.012343 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2v88"] Jan 27 08:05:49 crc kubenswrapper[4787]: I0127 08:05:49.052354 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/157309b8-f7e0-498b-ab3f-90f05cd34dfd-utilities\") pod \"community-operators-x2v88\" (UID: \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\") " pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:49 crc kubenswrapper[4787]: I0127 08:05:49.052678 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5xb\" (UniqueName: \"kubernetes.io/projected/157309b8-f7e0-498b-ab3f-90f05cd34dfd-kube-api-access-7g5xb\") pod \"community-operators-x2v88\" (UID: \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\") " pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:49 crc kubenswrapper[4787]: I0127 08:05:49.052846 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/157309b8-f7e0-498b-ab3f-90f05cd34dfd-catalog-content\") pod \"community-operators-x2v88\" (UID: \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\") " pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:49 crc kubenswrapper[4787]: I0127 08:05:49.153582 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/157309b8-f7e0-498b-ab3f-90f05cd34dfd-catalog-content\") pod \"community-operators-x2v88\" (UID: \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\") " pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:49 crc kubenswrapper[4787]: I0127 08:05:49.153647 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/157309b8-f7e0-498b-ab3f-90f05cd34dfd-utilities\") pod \"community-operators-x2v88\" (UID: \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\") " 
pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:49 crc kubenswrapper[4787]: I0127 08:05:49.153694 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g5xb\" (UniqueName: \"kubernetes.io/projected/157309b8-f7e0-498b-ab3f-90f05cd34dfd-kube-api-access-7g5xb\") pod \"community-operators-x2v88\" (UID: \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\") " pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:49 crc kubenswrapper[4787]: I0127 08:05:49.154260 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/157309b8-f7e0-498b-ab3f-90f05cd34dfd-utilities\") pod \"community-operators-x2v88\" (UID: \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\") " pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:49 crc kubenswrapper[4787]: I0127 08:05:49.154260 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/157309b8-f7e0-498b-ab3f-90f05cd34dfd-catalog-content\") pod \"community-operators-x2v88\" (UID: \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\") " pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:49 crc kubenswrapper[4787]: I0127 08:05:49.172697 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g5xb\" (UniqueName: \"kubernetes.io/projected/157309b8-f7e0-498b-ab3f-90f05cd34dfd-kube-api-access-7g5xb\") pod \"community-operators-x2v88\" (UID: \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\") " pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:49 crc kubenswrapper[4787]: I0127 08:05:49.326106 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:49 crc kubenswrapper[4787]: I0127 08:05:49.830759 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2v88"] Jan 27 08:05:49 crc kubenswrapper[4787]: W0127 08:05:49.840422 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod157309b8_f7e0_498b_ab3f_90f05cd34dfd.slice/crio-861fbb815ca629201014bdc0a87c1bdc00f076dcd52a0a4b5e5bbd54e56fb253 WatchSource:0}: Error finding container 861fbb815ca629201014bdc0a87c1bdc00f076dcd52a0a4b5e5bbd54e56fb253: Status 404 returned error can't find the container with id 861fbb815ca629201014bdc0a87c1bdc00f076dcd52a0a4b5e5bbd54e56fb253 Jan 27 08:05:50 crc kubenswrapper[4787]: I0127 08:05:50.315745 4787 generic.go:334] "Generic (PLEG): container finished" podID="157309b8-f7e0-498b-ab3f-90f05cd34dfd" containerID="783a62d9c4efde8e927f42db883dfc3694395b59698a43664bbaa75028b9fedb" exitCode=0 Jan 27 08:05:50 crc kubenswrapper[4787]: I0127 08:05:50.315880 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2v88" event={"ID":"157309b8-f7e0-498b-ab3f-90f05cd34dfd","Type":"ContainerDied","Data":"783a62d9c4efde8e927f42db883dfc3694395b59698a43664bbaa75028b9fedb"} Jan 27 08:05:50 crc kubenswrapper[4787]: I0127 08:05:50.317263 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2v88" event={"ID":"157309b8-f7e0-498b-ab3f-90f05cd34dfd","Type":"ContainerStarted","Data":"861fbb815ca629201014bdc0a87c1bdc00f076dcd52a0a4b5e5bbd54e56fb253"} Jan 27 08:05:51 crc kubenswrapper[4787]: I0127 08:05:51.327160 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-x2v88" event={"ID":"157309b8-f7e0-498b-ab3f-90f05cd34dfd","Type":"ContainerStarted","Data":"d3c461dc67225f8ec7d2fd953bf5df3c4ba58404084ae0214331e82ea98e7c63"} Jan 27 08:05:52 crc kubenswrapper[4787]: I0127 08:05:52.096127 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vfl8q" Jan 27 08:05:52 crc kubenswrapper[4787]: I0127 08:05:52.335466 4787 generic.go:334] "Generic (PLEG): container finished" podID="157309b8-f7e0-498b-ab3f-90f05cd34dfd" containerID="d3c461dc67225f8ec7d2fd953bf5df3c4ba58404084ae0214331e82ea98e7c63" exitCode=0 Jan 27 08:05:52 crc kubenswrapper[4787]: I0127 08:05:52.335535 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2v88" event={"ID":"157309b8-f7e0-498b-ab3f-90f05cd34dfd","Type":"ContainerDied","Data":"d3c461dc67225f8ec7d2fd953bf5df3c4ba58404084ae0214331e82ea98e7c63"} Jan 27 08:05:52 crc kubenswrapper[4787]: I0127 08:05:52.823737 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:05:52 crc kubenswrapper[4787]: I0127 08:05:52.823820 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:05:52 crc kubenswrapper[4787]: I0127 08:05:52.823874 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 08:05:52 crc kubenswrapper[4787]: I0127 08:05:52.824643 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76c11a5da51cedd24f4d664015e61242e532ce42823eb519b69423acc27d454d"} pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:05:52 crc kubenswrapper[4787]: I0127 08:05:52.824713 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" containerID="cri-o://76c11a5da51cedd24f4d664015e61242e532ce42823eb519b69423acc27d454d" gracePeriod=600 Jan 27 08:05:53 crc kubenswrapper[4787]: I0127 08:05:53.346371 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2v88" event={"ID":"157309b8-f7e0-498b-ab3f-90f05cd34dfd","Type":"ContainerStarted","Data":"9df41c8f928a5bb0401c44c1e75deb1e86678734d9c68d143045d3bb5b5ca180"} Jan 27 08:05:53 crc kubenswrapper[4787]: I0127 08:05:53.349291 4787 generic.go:334] "Generic (PLEG): container finished" podID="f051e184-acac-47cf-9e04-9df648288715" containerID="76c11a5da51cedd24f4d664015e61242e532ce42823eb519b69423acc27d454d" exitCode=0 Jan 27 08:05:53 crc kubenswrapper[4787]: I0127 08:05:53.349356 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" 
event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerDied","Data":"76c11a5da51cedd24f4d664015e61242e532ce42823eb519b69423acc27d454d"} Jan 27 08:05:53 crc kubenswrapper[4787]: I0127 08:05:53.349385 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerStarted","Data":"8407f8fd138ff9e300a7e8488e02cc30a66d50ed89a7c4ef1d3339c94e2a22b5"} Jan 27 08:05:53 crc kubenswrapper[4787]: I0127 08:05:53.349408 4787 scope.go:117] "RemoveContainer" containerID="e4c1a6fd72ac92bada26c4a251440b894a7362ee02ad76996c3668ab0c3d7b09" Jan 27 08:05:53 crc kubenswrapper[4787]: I0127 08:05:53.375483 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x2v88" podStartSLOduration=2.89044263 podStartE2EDuration="5.37545487s" podCreationTimestamp="2026-01-27 08:05:48 +0000 UTC" firstStartedPulling="2026-01-27 08:05:50.319913908 +0000 UTC m=+855.972269410" lastFinishedPulling="2026-01-27 08:05:52.804926138 +0000 UTC m=+858.457281650" observedRunningTime="2026-01-27 08:05:53.366284108 +0000 UTC m=+859.018639610" watchObservedRunningTime="2026-01-27 08:05:53.37545487 +0000 UTC m=+859.027810372" Jan 27 08:05:53 crc kubenswrapper[4787]: I0127 08:05:53.969292 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-t2fkx" Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.610924 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq"] Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.612650 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.615156 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.630881 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq"] Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.756601 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpshb\" (UniqueName: \"kubernetes.io/projected/9ba6b24a-de23-4528-87fc-c2932e3beb6e-kube-api-access-vpshb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq\" (UID: \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.756668 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ba6b24a-de23-4528-87fc-c2932e3beb6e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq\" (UID: \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.756989 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ba6b24a-de23-4528-87fc-c2932e3beb6e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq\" (UID: \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.858902 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpshb\" (UniqueName: \"kubernetes.io/projected/9ba6b24a-de23-4528-87fc-c2932e3beb6e-kube-api-access-vpshb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq\" (UID: \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.858974 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ba6b24a-de23-4528-87fc-c2932e3beb6e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq\" (UID: \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.859040 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ba6b24a-de23-4528-87fc-c2932e3beb6e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq\" (UID: \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.859695 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9ba6b24a-de23-4528-87fc-c2932e3beb6e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq\" (UID: \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.859808 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ba6b24a-de23-4528-87fc-c2932e3beb6e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq\" (UID: \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.889457 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpshb\" (UniqueName: \"kubernetes.io/projected/9ba6b24a-de23-4528-87fc-c2932e3beb6e-kube-api-access-vpshb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq\" (UID: \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:05:55 crc kubenswrapper[4787]: I0127 08:05:55.933094 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:05:56 crc kubenswrapper[4787]: I0127 08:05:56.157119 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq"] Jan 27 08:05:56 crc kubenswrapper[4787]: I0127 08:05:56.379460 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" event={"ID":"9ba6b24a-de23-4528-87fc-c2932e3beb6e","Type":"ContainerStarted","Data":"5f29ea04aece039d214830a95207446972a3ea9f53bf9cf1c752519753e9da5b"} Jan 27 08:05:56 crc kubenswrapper[4787]: I0127 08:05:56.379998 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" event={"ID":"9ba6b24a-de23-4528-87fc-c2932e3beb6e","Type":"ContainerStarted","Data":"10f9fd4a391efed59786dfdf11a4ceeac9813a3b731165bbc936f972ec892204"} Jan 27 08:05:57 crc kubenswrapper[4787]: I0127 08:05:57.388985 4787 generic.go:334] "Generic (PLEG): container finished" podID="9ba6b24a-de23-4528-87fc-c2932e3beb6e" containerID="5f29ea04aece039d214830a95207446972a3ea9f53bf9cf1c752519753e9da5b" exitCode=0 Jan 27 08:05:57 crc kubenswrapper[4787]: I0127 08:05:57.389073 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" event={"ID":"9ba6b24a-de23-4528-87fc-c2932e3beb6e","Type":"ContainerDied","Data":"5f29ea04aece039d214830a95207446972a3ea9f53bf9cf1c752519753e9da5b"} Jan 27 08:05:59 crc kubenswrapper[4787]: I0127 08:05:59.326250 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:59 crc kubenswrapper[4787]: I0127 08:05:59.327366 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:59 crc kubenswrapper[4787]: I0127 08:05:59.388109 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:05:59 crc kubenswrapper[4787]: I0127 08:05:59.442293 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:06:01 crc kubenswrapper[4787]: I0127 08:06:01.759839 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2v88"] Jan 27 08:06:02 crc kubenswrapper[4787]: I0127 08:06:02.079455 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pp6fl" Jan 27 08:06:02 crc kubenswrapper[4787]: I0127 08:06:02.426772 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" event={"ID":"9ba6b24a-de23-4528-87fc-c2932e3beb6e","Type":"ContainerStarted","Data":"07d22ab0377a276f80adc6d2b854c48196d36474e43eac44056f1d9c8569fcff"} Jan 27 08:06:02 crc kubenswrapper[4787]: I0127 08:06:02.426934 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x2v88" podUID="157309b8-f7e0-498b-ab3f-90f05cd34dfd" containerName="registry-server" containerID="cri-o://9df41c8f928a5bb0401c44c1e75deb1e86678734d9c68d143045d3bb5b5ca180" gracePeriod=2 Jan 27 08:06:03 crc kubenswrapper[4787]: I0127 08:06:03.436365 4787 generic.go:334] "Generic (PLEG): container finished" podID="9ba6b24a-de23-4528-87fc-c2932e3beb6e" containerID="07d22ab0377a276f80adc6d2b854c48196d36474e43eac44056f1d9c8569fcff" exitCode=0 Jan 27 08:06:03 crc kubenswrapper[4787]: I0127 08:06:03.436442 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" event={"ID":"9ba6b24a-de23-4528-87fc-c2932e3beb6e","Type":"ContainerDied","Data":"07d22ab0377a276f80adc6d2b854c48196d36474e43eac44056f1d9c8569fcff"} Jan 27 08:06:03 crc kubenswrapper[4787]: I0127 08:06:03.447655 4787 generic.go:334] "Generic (PLEG): container finished" podID="157309b8-f7e0-498b-ab3f-90f05cd34dfd" containerID="9df41c8f928a5bb0401c44c1e75deb1e86678734d9c68d143045d3bb5b5ca180" exitCode=0 Jan 27 08:06:03 crc kubenswrapper[4787]: I0127 08:06:03.447718 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2v88" event={"ID":"157309b8-f7e0-498b-ab3f-90f05cd34dfd","Type":"ContainerDied","Data":"9df41c8f928a5bb0401c44c1e75deb1e86678734d9c68d143045d3bb5b5ca180"} Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.327471 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.458913 4787 generic.go:334] "Generic (PLEG): container finished" podID="9ba6b24a-de23-4528-87fc-c2932e3beb6e" containerID="cefff68a6e6276878982ab6078637ced540e5b2f060fdec7c81331b0d7a9c76b" exitCode=0 Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.459010 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" event={"ID":"9ba6b24a-de23-4528-87fc-c2932e3beb6e","Type":"ContainerDied","Data":"cefff68a6e6276878982ab6078637ced540e5b2f060fdec7c81331b0d7a9c76b"} Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.462143 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2v88" event={"ID":"157309b8-f7e0-498b-ab3f-90f05cd34dfd","Type":"ContainerDied","Data":"861fbb815ca629201014bdc0a87c1bdc00f076dcd52a0a4b5e5bbd54e56fb253"} Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.462346 4787 scope.go:117] "RemoveContainer" containerID="9df41c8f928a5bb0401c44c1e75deb1e86678734d9c68d143045d3bb5b5ca180" Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.462214 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2v88" Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.496025 4787 scope.go:117] "RemoveContainer" containerID="d3c461dc67225f8ec7d2fd953bf5df3c4ba58404084ae0214331e82ea98e7c63" Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.513909 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/157309b8-f7e0-498b-ab3f-90f05cd34dfd-utilities\") pod \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\" (UID: \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\") " Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.514114 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/157309b8-f7e0-498b-ab3f-90f05cd34dfd-catalog-content\") pod \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\" (UID: \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\") " Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.514174 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g5xb\" (UniqueName: \"kubernetes.io/projected/157309b8-f7e0-498b-ab3f-90f05cd34dfd-kube-api-access-7g5xb\") pod \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\" (UID: \"157309b8-f7e0-498b-ab3f-90f05cd34dfd\") " Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.515107 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/157309b8-f7e0-498b-ab3f-90f05cd34dfd-utilities" (OuterVolumeSpecName: "utilities") pod "157309b8-f7e0-498b-ab3f-90f05cd34dfd" (UID: "157309b8-f7e0-498b-ab3f-90f05cd34dfd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.521537 4787 scope.go:117] "RemoveContainer" containerID="783a62d9c4efde8e927f42db883dfc3694395b59698a43664bbaa75028b9fedb" Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.521860 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157309b8-f7e0-498b-ab3f-90f05cd34dfd-kube-api-access-7g5xb" (OuterVolumeSpecName: "kube-api-access-7g5xb") pod "157309b8-f7e0-498b-ab3f-90f05cd34dfd" (UID: "157309b8-f7e0-498b-ab3f-90f05cd34dfd"). InnerVolumeSpecName "kube-api-access-7g5xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.604379 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/157309b8-f7e0-498b-ab3f-90f05cd34dfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "157309b8-f7e0-498b-ab3f-90f05cd34dfd" (UID: "157309b8-f7e0-498b-ab3f-90f05cd34dfd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.615925 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/157309b8-f7e0-498b-ab3f-90f05cd34dfd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.615955 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g5xb\" (UniqueName: \"kubernetes.io/projected/157309b8-f7e0-498b-ab3f-90f05cd34dfd-kube-api-access-7g5xb\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.615968 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/157309b8-f7e0-498b-ab3f-90f05cd34dfd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.802081 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2v88"] Jan 27 08:06:04 crc kubenswrapper[4787]: I0127 08:06:04.807643 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x2v88"] Jan 27 08:06:05 crc kubenswrapper[4787]: I0127 08:06:05.086881 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="157309b8-f7e0-498b-ab3f-90f05cd34dfd" path="/var/lib/kubelet/pods/157309b8-f7e0-498b-ab3f-90f05cd34dfd/volumes" Jan 27 08:06:05 crc kubenswrapper[4787]: I0127 08:06:05.789467 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:06:05 crc kubenswrapper[4787]: I0127 08:06:05.837202 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ba6b24a-de23-4528-87fc-c2932e3beb6e-util\") pod \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\" (UID: \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\") " Jan 27 08:06:05 crc kubenswrapper[4787]: I0127 08:06:05.837332 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ba6b24a-de23-4528-87fc-c2932e3beb6e-bundle\") pod \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\" (UID: \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\") " Jan 27 08:06:05 crc kubenswrapper[4787]: I0127 08:06:05.837383 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpshb\" (UniqueName: \"kubernetes.io/projected/9ba6b24a-de23-4528-87fc-c2932e3beb6e-kube-api-access-vpshb\") pod \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\" (UID: \"9ba6b24a-de23-4528-87fc-c2932e3beb6e\") " Jan 27 08:06:05 crc kubenswrapper[4787]: I0127 08:06:05.839071 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ba6b24a-de23-4528-87fc-c2932e3beb6e-bundle" (OuterVolumeSpecName: "bundle") pod "9ba6b24a-de23-4528-87fc-c2932e3beb6e" (UID: "9ba6b24a-de23-4528-87fc-c2932e3beb6e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:06:05 crc kubenswrapper[4787]: I0127 08:06:05.845773 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba6b24a-de23-4528-87fc-c2932e3beb6e-kube-api-access-vpshb" (OuterVolumeSpecName: "kube-api-access-vpshb") pod "9ba6b24a-de23-4528-87fc-c2932e3beb6e" (UID: "9ba6b24a-de23-4528-87fc-c2932e3beb6e"). InnerVolumeSpecName "kube-api-access-vpshb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:06:05 crc kubenswrapper[4787]: I0127 08:06:05.863091 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ba6b24a-de23-4528-87fc-c2932e3beb6e-util" (OuterVolumeSpecName: "util") pod "9ba6b24a-de23-4528-87fc-c2932e3beb6e" (UID: "9ba6b24a-de23-4528-87fc-c2932e3beb6e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:06:05 crc kubenswrapper[4787]: I0127 08:06:05.938437 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ba6b24a-de23-4528-87fc-c2932e3beb6e-util\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:05 crc kubenswrapper[4787]: I0127 08:06:05.938475 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ba6b24a-de23-4528-87fc-c2932e3beb6e-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:05 crc kubenswrapper[4787]: I0127 08:06:05.938487 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpshb\" (UniqueName: \"kubernetes.io/projected/9ba6b24a-de23-4528-87fc-c2932e3beb6e-kube-api-access-vpshb\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:06 crc kubenswrapper[4787]: I0127 08:06:06.489540 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" event={"ID":"9ba6b24a-de23-4528-87fc-c2932e3beb6e","Type":"ContainerDied","Data":"10f9fd4a391efed59786dfdf11a4ceeac9813a3b731165bbc936f972ec892204"} Jan 27 08:06:06 crc kubenswrapper[4787]: I0127 08:06:06.489752 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10f9fd4a391efed59786dfdf11a4ceeac9813a3b731165bbc936f972ec892204" Jan 27 08:06:06 crc kubenswrapper[4787]: I0127 08:06:06.489675 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.393053 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt"] Jan 27 08:06:08 crc kubenswrapper[4787]: E0127 08:06:08.393757 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba6b24a-de23-4528-87fc-c2932e3beb6e" containerName="pull" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.393771 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba6b24a-de23-4528-87fc-c2932e3beb6e" containerName="pull" Jan 27 08:06:08 crc kubenswrapper[4787]: E0127 08:06:08.393786 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157309b8-f7e0-498b-ab3f-90f05cd34dfd" containerName="registry-server" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.393793 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="157309b8-f7e0-498b-ab3f-90f05cd34dfd" containerName="registry-server" Jan 27 08:06:08 crc kubenswrapper[4787]: E0127 08:06:08.393802 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba6b24a-de23-4528-87fc-c2932e3beb6e" containerName="util" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.393809 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba6b24a-de23-4528-87fc-c2932e3beb6e" containerName="util" Jan 27 08:06:08 crc kubenswrapper[4787]: E0127 08:06:08.393825 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba6b24a-de23-4528-87fc-c2932e3beb6e" containerName="extract" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.393831 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba6b24a-de23-4528-87fc-c2932e3beb6e" containerName="extract" Jan 27 08:06:08 crc kubenswrapper[4787]: E0127 08:06:08.393849 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157309b8-f7e0-498b-ab3f-90f05cd34dfd" 
containerName="extract-utilities" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.393855 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="157309b8-f7e0-498b-ab3f-90f05cd34dfd" containerName="extract-utilities" Jan 27 08:06:08 crc kubenswrapper[4787]: E0127 08:06:08.393865 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157309b8-f7e0-498b-ab3f-90f05cd34dfd" containerName="extract-content" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.393871 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="157309b8-f7e0-498b-ab3f-90f05cd34dfd" containerName="extract-content" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.393985 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba6b24a-de23-4528-87fc-c2932e3beb6e" containerName="extract" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.394001 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="157309b8-f7e0-498b-ab3f-90f05cd34dfd" containerName="registry-server" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.394441 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.396772 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.396948 4787 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-q2bbt" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.397825 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.450607 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt"] Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.472769 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm7fr\" (UniqueName: \"kubernetes.io/projected/1f790347-7bae-4cc6-8b0b-956ecc8b0a41-kube-api-access-mm7fr\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mxzbt\" (UID: \"1f790347-7bae-4cc6-8b0b-956ecc8b0a41\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.472847 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f790347-7bae-4cc6-8b0b-956ecc8b0a41-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mxzbt\" (UID: \"1f790347-7bae-4cc6-8b0b-956ecc8b0a41\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.574394 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm7fr\" (UniqueName: \"kubernetes.io/projected/1f790347-7bae-4cc6-8b0b-956ecc8b0a41-kube-api-access-mm7fr\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mxzbt\" (UID: \"1f790347-7bae-4cc6-8b0b-956ecc8b0a41\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.574462 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f790347-7bae-4cc6-8b0b-956ecc8b0a41-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mxzbt\" (UID: \"1f790347-7bae-4cc6-8b0b-956ecc8b0a41\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.574985 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f790347-7bae-4cc6-8b0b-956ecc8b0a41-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mxzbt\" (UID: \"1f790347-7bae-4cc6-8b0b-956ecc8b0a41\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.597708 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm7fr\" (UniqueName: \"kubernetes.io/projected/1f790347-7bae-4cc6-8b0b-956ecc8b0a41-kube-api-access-mm7fr\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mxzbt\" (UID: \"1f790347-7bae-4cc6-8b0b-956ecc8b0a41\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt" Jan 27 08:06:08 crc kubenswrapper[4787]: I0127 08:06:08.711770 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt" Jan 27 08:06:09 crc kubenswrapper[4787]: I0127 08:06:09.041980 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt"] Jan 27 08:06:09 crc kubenswrapper[4787]: I0127 08:06:09.513683 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt" event={"ID":"1f790347-7bae-4cc6-8b0b-956ecc8b0a41","Type":"ContainerStarted","Data":"fa8269368f4ad340379bce26698e26209d945ebc33bf412edd182480b6803ef6"} Jan 27 08:06:16 crc kubenswrapper[4787]: I0127 08:06:16.567947 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt" event={"ID":"1f790347-7bae-4cc6-8b0b-956ecc8b0a41","Type":"ContainerStarted","Data":"1e76dcb6d00d74c3f639e57d6437a647e90713da3c239d31bb4e0370ef6f94f3"} Jan 27 08:06:16 crc kubenswrapper[4787]: I0127 08:06:16.590883 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mxzbt" podStartSLOduration=1.2664676 podStartE2EDuration="8.59086089s" podCreationTimestamp="2026-01-27 08:06:08 +0000 UTC" firstStartedPulling="2026-01-27 08:06:09.051688252 +0000 UTC m=+874.704043744" lastFinishedPulling="2026-01-27 08:06:16.376081502 +0000 UTC m=+882.028437034" observedRunningTime="2026-01-27 08:06:16.587424031 +0000 UTC m=+882.239779533" watchObservedRunningTime="2026-01-27 08:06:16.59086089 +0000 UTC m=+882.243216392" Jan 27 08:06:22 crc kubenswrapper[4787]: I0127 08:06:22.987930 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-lfhld"] Jan 27 08:06:22 crc kubenswrapper[4787]: I0127 08:06:22.989737 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" Jan 27 08:06:22 crc kubenswrapper[4787]: I0127 08:06:22.993067 4787 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dd48q" Jan 27 08:06:22 crc kubenswrapper[4787]: I0127 08:06:22.993098 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 08:06:22 crc kubenswrapper[4787]: I0127 08:06:22.993075 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.001313 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-lfhld"] Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.108477 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aff129cd-50c9-4f2a-b70d-d2066197a73b-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-lfhld\" (UID: \"aff129cd-50c9-4f2a-b70d-d2066197a73b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.108596 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m7cf\" (UniqueName: \"kubernetes.io/projected/aff129cd-50c9-4f2a-b70d-d2066197a73b-kube-api-access-6m7cf\") pod \"cert-manager-webhook-f4fb5df64-lfhld\" (UID: \"aff129cd-50c9-4f2a-b70d-d2066197a73b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.210406 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aff129cd-50c9-4f2a-b70d-d2066197a73b-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-lfhld\" (UID: \"aff129cd-50c9-4f2a-b70d-d2066197a73b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.210500 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m7cf\" (UniqueName: \"kubernetes.io/projected/aff129cd-50c9-4f2a-b70d-d2066197a73b-kube-api-access-6m7cf\") pod \"cert-manager-webhook-f4fb5df64-lfhld\" (UID: \"aff129cd-50c9-4f2a-b70d-d2066197a73b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.234881 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m7cf\" (UniqueName: \"kubernetes.io/projected/aff129cd-50c9-4f2a-b70d-d2066197a73b-kube-api-access-6m7cf\") pod \"cert-manager-webhook-f4fb5df64-lfhld\" (UID: \"aff129cd-50c9-4f2a-b70d-d2066197a73b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.237698 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aff129cd-50c9-4f2a-b70d-d2066197a73b-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-lfhld\" (UID: \"aff129cd-50c9-4f2a-b70d-d2066197a73b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.308313 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.432928 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h"] Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.438701 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.441892 4787 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6zlbf" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.458142 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h"] Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.518283 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56feb43c-1e68-4770-94c0-e22943ca4313-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-k4x6h\" (UID: \"56feb43c-1e68-4770-94c0-e22943ca4313\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.518632 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvvnp\" (UniqueName: \"kubernetes.io/projected/56feb43c-1e68-4770-94c0-e22943ca4313-kube-api-access-hvvnp\") pod \"cert-manager-cainjector-855d9ccff4-k4x6h\" (UID: \"56feb43c-1e68-4770-94c0-e22943ca4313\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.620050 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56feb43c-1e68-4770-94c0-e22943ca4313-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-k4x6h\" (UID: \"56feb43c-1e68-4770-94c0-e22943ca4313\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.620161 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvvnp\" (UniqueName: \"kubernetes.io/projected/56feb43c-1e68-4770-94c0-e22943ca4313-kube-api-access-hvvnp\") pod \"cert-manager-cainjector-855d9ccff4-k4x6h\" (UID: \"56feb43c-1e68-4770-94c0-e22943ca4313\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.642111 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvvnp\" (UniqueName: \"kubernetes.io/projected/56feb43c-1e68-4770-94c0-e22943ca4313-kube-api-access-hvvnp\") pod \"cert-manager-cainjector-855d9ccff4-k4x6h\" (UID: \"56feb43c-1e68-4770-94c0-e22943ca4313\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.642352 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56feb43c-1e68-4770-94c0-e22943ca4313-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-k4x6h\" (UID: \"56feb43c-1e68-4770-94c0-e22943ca4313\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h" Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.742626 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["cert-manager/cert-manager-webhook-f4fb5df64-lfhld"] Jan 27 08:06:23 crc kubenswrapper[4787]: W0127 08:06:23.754811 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaff129cd_50c9_4f2a_b70d_d2066197a73b.slice/crio-9cfc1afd1a852cd3af2bc57061b7546cceb15f0a01c2cdf0d4da91f0e44d3571 WatchSource:0}: Error finding container 9cfc1afd1a852cd3af2bc57061b7546cceb15f0a01c2cdf0d4da91f0e44d3571: Status 404 returned error can't find the container with id 9cfc1afd1a852cd3af2bc57061b7546cceb15f0a01c2cdf0d4da91f0e44d3571 Jan 27 08:06:23 crc kubenswrapper[4787]: I0127 08:06:23.781119 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h" Jan 27 08:06:24 crc kubenswrapper[4787]: I0127 08:06:24.037730 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h"] Jan 27 08:06:24 crc kubenswrapper[4787]: I0127 08:06:24.625589 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h" event={"ID":"56feb43c-1e68-4770-94c0-e22943ca4313","Type":"ContainerStarted","Data":"ddb2971ae2abb1847af8ff7f3fe30349115a7e84994b616b0b5c5987592c4701"} Jan 27 08:06:24 crc kubenswrapper[4787]: I0127 08:06:24.626660 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" event={"ID":"aff129cd-50c9-4f2a-b70d-d2066197a73b","Type":"ContainerStarted","Data":"9cfc1afd1a852cd3af2bc57061b7546cceb15f0a01c2cdf0d4da91f0e44d3571"} Jan 27 08:06:34 crc kubenswrapper[4787]: I0127 08:06:34.708980 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h" event={"ID":"56feb43c-1e68-4770-94c0-e22943ca4313","Type":"ContainerStarted","Data":"d76093a0255b2cc26e714d8e067b83d63d070ebc6aadfc15e7d53c5195da95a1"} Jan 27 08:06:34 crc kubenswrapper[4787]: I0127 08:06:34.711980 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" event={"ID":"aff129cd-50c9-4f2a-b70d-d2066197a73b","Type":"ContainerStarted","Data":"6a8147ec7b40a4cf0374c7e4a011f52b62883274ae45bca8372a496e00b7628a"} Jan 27 08:06:34 crc kubenswrapper[4787]: I0127 08:06:34.712168 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" Jan 27 08:06:34 crc kubenswrapper[4787]: I0127 08:06:34.735011 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-k4x6h" podStartSLOduration=1.278180528 podStartE2EDuration="11.734968361s" podCreationTimestamp="2026-01-27 08:06:23 +0000 UTC" firstStartedPulling="2026-01-27 08:06:24.041405752 +0000 UTC m=+889.693761244" lastFinishedPulling="2026-01-27 08:06:34.498193585 +0000 UTC m=+900.150549077" observedRunningTime="2026-01-27 08:06:34.731527562 +0000 UTC m=+900.383883074" watchObservedRunningTime="2026-01-27 08:06:34.734968361 +0000 UTC m=+900.387323853" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.072415 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" podStartSLOduration=3.29412206 podStartE2EDuration="14.072386776s" podCreationTimestamp="2026-01-27 08:06:22 +0000 UTC" firstStartedPulling="2026-01-27 08:06:23.758736502 +0000 UTC m=+889.411091994" lastFinishedPulling="2026-01-27 
08:06:34.537001218 +0000 UTC m=+900.189356710" observedRunningTime="2026-01-27 08:06:34.763407807 +0000 UTC m=+900.415763299" watchObservedRunningTime="2026-01-27 08:06:36.072386776 +0000 UTC m=+901.724742288" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.078669 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5nrrg"] Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.080705 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.093010 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5nrrg"] Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.231540 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb2tf\" (UniqueName: \"kubernetes.io/projected/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-kube-api-access-mb2tf\") pod \"certified-operators-5nrrg\" (UID: \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\") " pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.231956 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-utilities\") pod \"certified-operators-5nrrg\" (UID: \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\") " pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.232071 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-catalog-content\") pod \"certified-operators-5nrrg\" (UID: \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\") " pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.334212 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-utilities\") pod \"certified-operators-5nrrg\" (UID: \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\") " pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.334339 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-catalog-content\") pod \"certified-operators-5nrrg\" (UID: \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\") " pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.334425 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb2tf\" (UniqueName: \"kubernetes.io/projected/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-kube-api-access-mb2tf\") pod \"certified-operators-5nrrg\" (UID: \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\") " pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.334847 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-utilities\") pod \"certified-operators-5nrrg\" (UID: \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\") " 
pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.335178 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-catalog-content\") pod \"certified-operators-5nrrg\" (UID: \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\") " pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.366143 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb2tf\" (UniqueName: \"kubernetes.io/projected/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-kube-api-access-mb2tf\") pod \"certified-operators-5nrrg\" (UID: \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\") " pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.449275 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:36 crc kubenswrapper[4787]: I0127 08:06:36.923264 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5nrrg"] Jan 27 08:06:37 crc kubenswrapper[4787]: I0127 08:06:37.737785 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nrrg" event={"ID":"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c","Type":"ContainerStarted","Data":"53cd4e630003b6798c9e5e8039883a25323c89f632886d1ab420bebf4450bb8f"} Jan 27 08:06:38 crc kubenswrapper[4787]: I0127 08:06:38.746541 4787 generic.go:334] "Generic (PLEG): container finished" podID="e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" containerID="b5e4cd5270b6e96f6cb6daa829d6d67b9d5b6ecd4d86e4f5b96e9b21d3b9dfd3" exitCode=0 Jan 27 08:06:38 crc kubenswrapper[4787]: I0127 08:06:38.746708 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nrrg" event={"ID":"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c","Type":"ContainerDied","Data":"b5e4cd5270b6e96f6cb6daa829d6d67b9d5b6ecd4d86e4f5b96e9b21d3b9dfd3"} Jan 27 08:06:39 crc kubenswrapper[4787]: I0127 08:06:39.766186 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-vbzqb"] Jan 27 08:06:39 crc kubenswrapper[4787]: I0127 08:06:39.767225 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-vbzqb" Jan 27 08:06:39 crc kubenswrapper[4787]: I0127 08:06:39.770861 4787 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-gqk8b" Jan 27 08:06:39 crc kubenswrapper[4787]: I0127 08:06:39.823643 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27mhv\" (UniqueName: \"kubernetes.io/projected/2219abac-7659-407a-b36f-cd0160e70c25-kube-api-access-27mhv\") pod \"cert-manager-86cb77c54b-vbzqb\" (UID: \"2219abac-7659-407a-b36f-cd0160e70c25\") " pod="cert-manager/cert-manager-86cb77c54b-vbzqb" Jan 27 08:06:39 crc kubenswrapper[4787]: I0127 08:06:39.823681 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2219abac-7659-407a-b36f-cd0160e70c25-bound-sa-token\") pod \"cert-manager-86cb77c54b-vbzqb\" (UID: \"2219abac-7659-407a-b36f-cd0160e70c25\") " pod="cert-manager/cert-manager-86cb77c54b-vbzqb" Jan 27 08:06:39 crc kubenswrapper[4787]: I0127 08:06:39.828130 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-vbzqb"] Jan 27 08:06:39 crc kubenswrapper[4787]: I0127 08:06:39.925246 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27mhv\" (UniqueName: \"kubernetes.io/projected/2219abac-7659-407a-b36f-cd0160e70c25-kube-api-access-27mhv\") pod \"cert-manager-86cb77c54b-vbzqb\" (UID: \"2219abac-7659-407a-b36f-cd0160e70c25\") " pod="cert-manager/cert-manager-86cb77c54b-vbzqb" Jan 27 08:06:39 crc kubenswrapper[4787]: I0127 08:06:39.925299 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2219abac-7659-407a-b36f-cd0160e70c25-bound-sa-token\") pod \"cert-manager-86cb77c54b-vbzqb\" (UID: \"2219abac-7659-407a-b36f-cd0160e70c25\") " pod="cert-manager/cert-manager-86cb77c54b-vbzqb" Jan 27 08:06:39 crc kubenswrapper[4787]: I0127 08:06:39.947918 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2219abac-7659-407a-b36f-cd0160e70c25-bound-sa-token\") pod \"cert-manager-86cb77c54b-vbzqb\" (UID: \"2219abac-7659-407a-b36f-cd0160e70c25\") " pod="cert-manager/cert-manager-86cb77c54b-vbzqb" Jan 27 08:06:39 crc kubenswrapper[4787]: I0127 08:06:39.948784 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27mhv\" (UniqueName: \"kubernetes.io/projected/2219abac-7659-407a-b36f-cd0160e70c25-kube-api-access-27mhv\") pod \"cert-manager-86cb77c54b-vbzqb\" (UID: \"2219abac-7659-407a-b36f-cd0160e70c25\") " pod="cert-manager/cert-manager-86cb77c54b-vbzqb" Jan 27 08:06:40 crc kubenswrapper[4787]: I0127 08:06:40.138323 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-vbzqb" Jan 27 08:06:40 crc kubenswrapper[4787]: I0127 08:06:40.574759 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-vbzqb"] Jan 27 08:06:40 crc kubenswrapper[4787]: W0127 08:06:40.577797 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2219abac_7659_407a_b36f_cd0160e70c25.slice/crio-7b58213c72e31a3d918d39506e7cee3aec8c402df4d927ef8924d3623b10494c WatchSource:0}: Error finding container 7b58213c72e31a3d918d39506e7cee3aec8c402df4d927ef8924d3623b10494c: Status 404 returned error can't find the container with id 7b58213c72e31a3d918d39506e7cee3aec8c402df4d927ef8924d3623b10494c Jan 27 08:06:40 crc kubenswrapper[4787]: I0127 08:06:40.768613 4787 generic.go:334] "Generic (PLEG): container finished" podID="e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" containerID="08b4f62655ac0067d486c33926cf62c3c39cdc3fe5830d59c54b976d0643b999" exitCode=0 Jan 27 08:06:40 crc kubenswrapper[4787]: I0127 08:06:40.768659 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nrrg" event={"ID":"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c","Type":"ContainerDied","Data":"08b4f62655ac0067d486c33926cf62c3c39cdc3fe5830d59c54b976d0643b999"} Jan 27 08:06:40 crc kubenswrapper[4787]: I0127 08:06:40.770543 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-vbzqb" event={"ID":"2219abac-7659-407a-b36f-cd0160e70c25","Type":"ContainerStarted","Data":"5debd36c6aeca9de3a9e1f468c032b83891e65e03e7a9cf75e4bc2f6ec4e4063"} Jan 27 08:06:40 crc kubenswrapper[4787]: I0127 08:06:40.770601 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-vbzqb" event={"ID":"2219abac-7659-407a-b36f-cd0160e70c25","Type":"ContainerStarted","Data":"7b58213c72e31a3d918d39506e7cee3aec8c402df4d927ef8924d3623b10494c"} Jan 27 08:06:40 crc kubenswrapper[4787]: I0127 08:06:40.821683 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-vbzqb" podStartSLOduration=1.821651228 podStartE2EDuration="1.821651228s" podCreationTimestamp="2026-01-27 08:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:06:40.813478592 +0000 UTC m=+906.465834094" watchObservedRunningTime="2026-01-27 08:06:40.821651228 +0000 UTC m=+906.474006750" Jan 27 08:06:42 crc kubenswrapper[4787]: I0127 08:06:42.787482 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nrrg" event={"ID":"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c","Type":"ContainerStarted","Data":"4d80e93e40acb24a06e931eb2842625cf5854351ef02789c3d3a51d8155a8f6e"} Jan 27 08:06:42 crc kubenswrapper[4787]: I0127 08:06:42.808956 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5nrrg" podStartSLOduration=3.371942396 podStartE2EDuration="6.808924206s" podCreationTimestamp="2026-01-27 08:06:36 +0000 UTC" firstStartedPulling="2026-01-27 08:06:38.749173196 +0000 UTC m=+904.401528688" lastFinishedPulling="2026-01-27 08:06:42.186155006 +0000 UTC m=+907.838510498" observedRunningTime="2026-01-27 08:06:42.804517745 +0000 UTC m=+908.456873247" watchObservedRunningTime="2026-01-27 08:06:42.808924206 +0000 UTC m=+908.461279698" Jan 27 08:06:43 crc 
kubenswrapper[4787]: I0127 08:06:43.313518 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-lfhld" Jan 27 08:06:46 crc kubenswrapper[4787]: I0127 08:06:46.450047 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:46 crc kubenswrapper[4787]: I0127 08:06:46.450509 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:46 crc kubenswrapper[4787]: I0127 08:06:46.535471 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:46 crc kubenswrapper[4787]: I0127 08:06:46.984674 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jlpfk"] Jan 27 08:06:46 crc kubenswrapper[4787]: I0127 08:06:46.986730 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jlpfk" Jan 27 08:06:46 crc kubenswrapper[4787]: I0127 08:06:46.990209 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 08:06:46 crc kubenswrapper[4787]: I0127 08:06:46.990267 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 08:06:46 crc kubenswrapper[4787]: I0127 08:06:46.990439 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-l8j6d" Jan 27 08:06:46 crc kubenswrapper[4787]: I0127 08:06:46.994360 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jlpfk"] Jan 27 08:06:47 crc kubenswrapper[4787]: I0127 08:06:47.062764 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv7q6\" (UniqueName: \"kubernetes.io/projected/c3c0535e-f30f-45de-9722-9431520772a2-kube-api-access-zv7q6\") pod \"openstack-operator-index-jlpfk\" (UID: \"c3c0535e-f30f-45de-9722-9431520772a2\") " pod="openstack-operators/openstack-operator-index-jlpfk" Jan 27 08:06:47 crc kubenswrapper[4787]: I0127 08:06:47.164182 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv7q6\" (UniqueName: \"kubernetes.io/projected/c3c0535e-f30f-45de-9722-9431520772a2-kube-api-access-zv7q6\") pod \"openstack-operator-index-jlpfk\" (UID: \"c3c0535e-f30f-45de-9722-9431520772a2\") " pod="openstack-operators/openstack-operator-index-jlpfk" Jan 27 08:06:47 crc kubenswrapper[4787]: I0127 08:06:47.190083 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv7q6\" (UniqueName: \"kubernetes.io/projected/c3c0535e-f30f-45de-9722-9431520772a2-kube-api-access-zv7q6\") pod \"openstack-operator-index-jlpfk\" (UID: \"c3c0535e-f30f-45de-9722-9431520772a2\") " pod="openstack-operators/openstack-operator-index-jlpfk" Jan 27 08:06:47 crc kubenswrapper[4787]: I0127 08:06:47.312623 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jlpfk" Jan 27 08:06:47 crc kubenswrapper[4787]: I0127 08:06:47.758538 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jlpfk"] Jan 27 08:06:47 crc kubenswrapper[4787]: I0127 08:06:47.828178 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jlpfk" event={"ID":"c3c0535e-f30f-45de-9722-9431520772a2","Type":"ContainerStarted","Data":"9cf1698799f8600b465f5d25e75ac44f2338a58601c45a62f35b4c50e4a809ef"} Jan 27 08:06:50 crc kubenswrapper[4787]: I0127 08:06:50.854530 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jlpfk" event={"ID":"c3c0535e-f30f-45de-9722-9431520772a2","Type":"ContainerStarted","Data":"2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93"} Jan 27 08:06:50 crc kubenswrapper[4787]: I0127 08:06:50.870156 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jlpfk" podStartSLOduration=2.798281283 podStartE2EDuration="4.870135846s" podCreationTimestamp="2026-01-27 08:06:46 +0000 UTC" firstStartedPulling="2026-01-27 08:06:47.764645828 +0000 UTC m=+913.417001320" lastFinishedPulling="2026-01-27 08:06:49.836500381 +0000 UTC m=+915.488855883" observedRunningTime="2026-01-27 08:06:50.868802231 +0000 UTC m=+916.521157743" watchObservedRunningTime="2026-01-27 08:06:50.870135846 +0000 UTC m=+916.522491348" Jan 27 08:06:51 crc kubenswrapper[4787]: I0127 08:06:51.182229 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jlpfk"] Jan 27 08:06:51 crc kubenswrapper[4787]: I0127 08:06:51.785757 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cnfx4"] Jan 27 08:06:51 crc kubenswrapper[4787]: I0127 08:06:51.786800 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cnfx4" Jan 27 08:06:51 crc kubenswrapper[4787]: I0127 08:06:51.802596 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cnfx4"] Jan 27 08:06:51 crc kubenswrapper[4787]: I0127 08:06:51.848541 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8pc4\" (UniqueName: \"kubernetes.io/projected/ba39dacd-d1da-4a2b-a4b9-fa0e917986f0-kube-api-access-b8pc4\") pod \"openstack-operator-index-cnfx4\" (UID: \"ba39dacd-d1da-4a2b-a4b9-fa0e917986f0\") " pod="openstack-operators/openstack-operator-index-cnfx4" Jan 27 08:06:51 crc kubenswrapper[4787]: I0127 08:06:51.949698 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8pc4\" (UniqueName: \"kubernetes.io/projected/ba39dacd-d1da-4a2b-a4b9-fa0e917986f0-kube-api-access-b8pc4\") pod \"openstack-operator-index-cnfx4\" (UID: \"ba39dacd-d1da-4a2b-a4b9-fa0e917986f0\") " pod="openstack-operators/openstack-operator-index-cnfx4" Jan 27 08:06:51 crc kubenswrapper[4787]: I0127 08:06:51.974530 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8pc4\" (UniqueName: \"kubernetes.io/projected/ba39dacd-d1da-4a2b-a4b9-fa0e917986f0-kube-api-access-b8pc4\") pod \"openstack-operator-index-cnfx4\" (UID: \"ba39dacd-d1da-4a2b-a4b9-fa0e917986f0\") " pod="openstack-operators/openstack-operator-index-cnfx4" Jan 27 08:06:52 crc kubenswrapper[4787]: I0127 08:06:52.113154 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cnfx4" Jan 27 08:06:52 crc kubenswrapper[4787]: I0127 08:06:52.352223 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cnfx4"] Jan 27 08:06:52 crc kubenswrapper[4787]: I0127 08:06:52.869626 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnfx4" event={"ID":"ba39dacd-d1da-4a2b-a4b9-fa0e917986f0","Type":"ContainerStarted","Data":"24b2ba488be9122dc0ac8416758dc8226f35009874f61b62bbecc3dbfdde372d"} Jan 27 08:06:52 crc kubenswrapper[4787]: I0127 08:06:52.869962 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnfx4" event={"ID":"ba39dacd-d1da-4a2b-a4b9-fa0e917986f0","Type":"ContainerStarted","Data":"189b45478376fa558633434a444ff60215cef664cdc5b40d3af9b5c540b3f261"} Jan 27 08:06:52 crc kubenswrapper[4787]: I0127 08:06:52.869774 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-jlpfk" podUID="c3c0535e-f30f-45de-9722-9431520772a2" containerName="registry-server" containerID="cri-o://2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93" gracePeriod=2 Jan 27 08:06:52 crc kubenswrapper[4787]: I0127 08:06:52.895491 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cnfx4" podStartSLOduration=1.838866024 podStartE2EDuration="1.895469774s" podCreationTimestamp="2026-01-27 08:06:51 +0000 UTC" firstStartedPulling="2026-01-27 08:06:52.382192254 +0000 UTC m=+918.034547746" lastFinishedPulling="2026-01-27 08:06:52.438795994 +0000 UTC m=+918.091151496" observedRunningTime="2026-01-27 08:06:52.893160342 +0000 UTC m=+918.545515864" watchObservedRunningTime="2026-01-27 08:06:52.895469774 +0000 UTC m=+918.547825266" 
Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.280435 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jlpfk" Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.377504 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv7q6\" (UniqueName: \"kubernetes.io/projected/c3c0535e-f30f-45de-9722-9431520772a2-kube-api-access-zv7q6\") pod \"c3c0535e-f30f-45de-9722-9431520772a2\" (UID: \"c3c0535e-f30f-45de-9722-9431520772a2\") " Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.385717 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3c0535e-f30f-45de-9722-9431520772a2-kube-api-access-zv7q6" (OuterVolumeSpecName: "kube-api-access-zv7q6") pod "c3c0535e-f30f-45de-9722-9431520772a2" (UID: "c3c0535e-f30f-45de-9722-9431520772a2"). InnerVolumeSpecName "kube-api-access-zv7q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.479412 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv7q6\" (UniqueName: \"kubernetes.io/projected/c3c0535e-f30f-45de-9722-9431520772a2-kube-api-access-zv7q6\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.879250 4787 generic.go:334] "Generic (PLEG): container finished" podID="c3c0535e-f30f-45de-9722-9431520772a2" containerID="2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93" exitCode=0 Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.879357 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jlpfk" Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.879368 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jlpfk" event={"ID":"c3c0535e-f30f-45de-9722-9431520772a2","Type":"ContainerDied","Data":"2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93"} Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.879454 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jlpfk" event={"ID":"c3c0535e-f30f-45de-9722-9431520772a2","Type":"ContainerDied","Data":"9cf1698799f8600b465f5d25e75ac44f2338a58601c45a62f35b4c50e4a809ef"} Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.879485 4787 scope.go:117] "RemoveContainer" containerID="2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93" Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.901905 4787 scope.go:117] "RemoveContainer" containerID="2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93" Jan 27 08:06:53 crc kubenswrapper[4787]: E0127 08:06:53.902442 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93\": container with ID starting with 2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93 not found: ID does not exist" containerID="2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93" Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.902520 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93"} err="failed to get container status 
\"2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93\": rpc error: code = NotFound desc = could not find container \"2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93\": container with ID starting with 2ae36a8c3cc95644299c340187a4c23740eecc3771632126a61345b654a07a93 not found: ID does not exist" Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.918184 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jlpfk"] Jan 27 08:06:53 crc kubenswrapper[4787]: I0127 08:06:53.924487 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-jlpfk"] Jan 27 08:06:55 crc kubenswrapper[4787]: I0127 08:06:55.086103 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3c0535e-f30f-45de-9722-9431520772a2" path="/var/lib/kubelet/pods/c3c0535e-f30f-45de-9722-9431520772a2/volumes" Jan 27 08:06:56 crc kubenswrapper[4787]: I0127 08:06:56.497512 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:58 crc kubenswrapper[4787]: I0127 08:06:58.771532 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5nrrg"] Jan 27 08:06:58 crc kubenswrapper[4787]: I0127 08:06:58.772303 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5nrrg" podUID="e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" containerName="registry-server" containerID="cri-o://4d80e93e40acb24a06e931eb2842625cf5854351ef02789c3d3a51d8155a8f6e" gracePeriod=2 Jan 27 08:06:58 crc kubenswrapper[4787]: I0127 08:06:58.922562 4787 generic.go:334] "Generic (PLEG): container finished" podID="e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" containerID="4d80e93e40acb24a06e931eb2842625cf5854351ef02789c3d3a51d8155a8f6e" exitCode=0 Jan 27 08:06:58 crc kubenswrapper[4787]: I0127 08:06:58.922623 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nrrg" event={"ID":"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c","Type":"ContainerDied","Data":"4d80e93e40acb24a06e931eb2842625cf5854351ef02789c3d3a51d8155a8f6e"} Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.206427 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.261791 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-utilities\") pod \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\" (UID: \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\") " Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.261925 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb2tf\" (UniqueName: \"kubernetes.io/projected/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-kube-api-access-mb2tf\") pod \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\" (UID: \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\") " Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.262443 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-catalog-content\") pod \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\" (UID: \"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c\") " Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.263038 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-utilities" (OuterVolumeSpecName: "utilities") pod "e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" (UID: "e78c8f6a-3cd2-4fbc-af78-3fbaba41070c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.270064 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-kube-api-access-mb2tf" (OuterVolumeSpecName: "kube-api-access-mb2tf") pod "e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" (UID: "e78c8f6a-3cd2-4fbc-af78-3fbaba41070c"). InnerVolumeSpecName "kube-api-access-mb2tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.312778 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" (UID: "e78c8f6a-3cd2-4fbc-af78-3fbaba41070c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.364410 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.364457 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.364471 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb2tf\" (UniqueName: \"kubernetes.io/projected/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c-kube-api-access-mb2tf\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.933168 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nrrg" event={"ID":"e78c8f6a-3cd2-4fbc-af78-3fbaba41070c","Type":"ContainerDied","Data":"53cd4e630003b6798c9e5e8039883a25323c89f632886d1ab420bebf4450bb8f"} Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.933266 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nrrg" Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.933721 4787 scope.go:117] "RemoveContainer" containerID="4d80e93e40acb24a06e931eb2842625cf5854351ef02789c3d3a51d8155a8f6e" Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.966842 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5nrrg"] Jan 27 08:06:59 crc kubenswrapper[4787]: I0127 08:06:59.978525 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5nrrg"] Jan 27 08:07:00 crc kubenswrapper[4787]: I0127 08:07:00.125649 4787 scope.go:117] "RemoveContainer" containerID="08b4f62655ac0067d486c33926cf62c3c39cdc3fe5830d59c54b976d0643b999" Jan 27 08:07:00 crc kubenswrapper[4787]: I0127 08:07:00.143648 4787 scope.go:117] "RemoveContainer" containerID="b5e4cd5270b6e96f6cb6daa829d6d67b9d5b6ecd4d86e4f5b96e9b21d3b9dfd3" Jan 27 08:07:01 crc kubenswrapper[4787]: I0127 08:07:01.090021 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" path="/var/lib/kubelet/pods/e78c8f6a-3cd2-4fbc-af78-3fbaba41070c/volumes" Jan 27 08:07:02 crc kubenswrapper[4787]: I0127 08:07:02.113750 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cnfx4" Jan 27 08:07:02 crc kubenswrapper[4787]: I0127 08:07:02.114860 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cnfx4" Jan 27 08:07:02 crc kubenswrapper[4787]: I0127 08:07:02.157114 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cnfx4" Jan 27 08:07:02 crc kubenswrapper[4787]: I0127 08:07:02.986523 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cnfx4" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.511729 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g"] Jan 27 08:07:10 crc kubenswrapper[4787]: E0127 
08:07:10.512989 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" containerName="registry-server" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.513017 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" containerName="registry-server" Jan 27 08:07:10 crc kubenswrapper[4787]: E0127 08:07:10.513048 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3c0535e-f30f-45de-9722-9431520772a2" containerName="registry-server" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.513057 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3c0535e-f30f-45de-9722-9431520772a2" containerName="registry-server" Jan 27 08:07:10 crc kubenswrapper[4787]: E0127 08:07:10.513071 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" containerName="extract-content" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.513080 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" containerName="extract-content" Jan 27 08:07:10 crc kubenswrapper[4787]: E0127 08:07:10.513097 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" containerName="extract-utilities" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.513105 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" containerName="extract-utilities" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.513295 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78c8f6a-3cd2-4fbc-af78-3fbaba41070c" containerName="registry-server" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.513339 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3c0535e-f30f-45de-9722-9431520772a2" containerName="registry-server" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.515309 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.518854 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7z9mk" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.522387 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g"] Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.557275 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm2lq\" (UniqueName: \"kubernetes.io/projected/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-kube-api-access-rm2lq\") pod \"fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g\" (UID: \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\") " pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.557521 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-bundle\") pod \"fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g\" (UID: \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\") " pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.557667 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-util\") pod \"fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g\" (UID: \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\") " pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.659829 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm2lq\" (UniqueName: \"kubernetes.io/projected/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-kube-api-access-rm2lq\") pod \"fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g\" (UID: \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\") " pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.659992 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-bundle\") pod \"fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g\" (UID: \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\") " pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.660054 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-util\") pod \"fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g\" (UID: \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\") " pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.661136 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-util\") pod \"fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g\" (UID: \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\") " pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.661141 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-bundle\") pod \"fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g\" (UID: \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\") " pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.685142 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm2lq\" (UniqueName: \"kubernetes.io/projected/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-kube-api-access-rm2lq\") pod \"fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g\" (UID: \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\") " pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:10 crc kubenswrapper[4787]: I0127 08:07:10.836459 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:11 crc kubenswrapper[4787]: I0127 08:07:11.355878 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g"] Jan 27 08:07:12 crc kubenswrapper[4787]: I0127 08:07:12.020474 4787 generic.go:334] "Generic (PLEG): container finished" podID="f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" containerID="a9f1fd61431cb95f96b0c4b871f3a6bb8ea404beadec1c1139de236e15c36b11" exitCode=0 Jan 27 08:07:12 crc kubenswrapper[4787]: I0127 08:07:12.020618 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" event={"ID":"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1","Type":"ContainerDied","Data":"a9f1fd61431cb95f96b0c4b871f3a6bb8ea404beadec1c1139de236e15c36b11"} Jan 27 08:07:12 crc kubenswrapper[4787]: I0127 08:07:12.020708 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" event={"ID":"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1","Type":"ContainerStarted","Data":"1db295cc6731d3f08dfb5af5f597d84207441910100091f0cea6351873c9986d"} Jan 27 08:07:13 crc kubenswrapper[4787]: I0127 08:07:13.053531 4787 generic.go:334] "Generic (PLEG): container finished" podID="f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" containerID="a4074cf2dabef95322eb20a835884d5b51098cda722a7c7f95a1971f131bf0a9" exitCode=0 Jan 27 08:07:13 crc kubenswrapper[4787]: I0127 08:07:13.054778 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" event={"ID":"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1","Type":"ContainerDied","Data":"a4074cf2dabef95322eb20a835884d5b51098cda722a7c7f95a1971f131bf0a9"} Jan 27 08:07:14 crc kubenswrapper[4787]: I0127 08:07:14.065491 4787 generic.go:334] "Generic (PLEG): container finished" podID="f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" containerID="9a028a61db959db2d5e788fbb0261eddaa148c4f87d38dfac8bb17301f06f374" exitCode=0 Jan 27 08:07:14 crc kubenswrapper[4787]: I0127 08:07:14.065615 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" event={"ID":"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1","Type":"ContainerDied","Data":"9a028a61db959db2d5e788fbb0261eddaa148c4f87d38dfac8bb17301f06f374"} Jan 27 08:07:15 crc kubenswrapper[4787]: I0127 08:07:15.385911 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:15 crc kubenswrapper[4787]: I0127 08:07:15.432576 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-util\") pod \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\" (UID: \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\") " Jan 27 08:07:15 crc kubenswrapper[4787]: I0127 08:07:15.432679 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm2lq\" (UniqueName: \"kubernetes.io/projected/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-kube-api-access-rm2lq\") pod \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\" (UID: \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\") " Jan 27 08:07:15 crc kubenswrapper[4787]: I0127 08:07:15.432750 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-bundle\") pod \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\" (UID: \"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1\") " Jan 27 08:07:15 crc kubenswrapper[4787]: I0127 08:07:15.433910 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-bundle" (OuterVolumeSpecName: "bundle") pod "f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" (UID: "f63e7c4f-d194-4209-a8ac-f67c9dc7dde1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:07:15 crc kubenswrapper[4787]: I0127 08:07:15.441060 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-kube-api-access-rm2lq" (OuterVolumeSpecName: "kube-api-access-rm2lq") pod "f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" (UID: "f63e7c4f-d194-4209-a8ac-f67c9dc7dde1"). InnerVolumeSpecName "kube-api-access-rm2lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:15 crc kubenswrapper[4787]: I0127 08:07:15.452062 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-util" (OuterVolumeSpecName: "util") pod "f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" (UID: "f63e7c4f-d194-4209-a8ac-f67c9dc7dde1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:07:15 crc kubenswrapper[4787]: I0127 08:07:15.533984 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-util\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:15 crc kubenswrapper[4787]: I0127 08:07:15.534392 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm2lq\" (UniqueName: \"kubernetes.io/projected/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-kube-api-access-rm2lq\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:15 crc kubenswrapper[4787]: I0127 08:07:15.534467 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f63e7c4f-d194-4209-a8ac-f67c9dc7dde1-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:16 crc kubenswrapper[4787]: I0127 08:07:16.080783 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" event={"ID":"f63e7c4f-d194-4209-a8ac-f67c9dc7dde1","Type":"ContainerDied","Data":"1db295cc6731d3f08dfb5af5f597d84207441910100091f0cea6351873c9986d"} Jan 27 08:07:16 crc kubenswrapper[4787]: I0127 08:07:16.080829 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1db295cc6731d3f08dfb5af5f597d84207441910100091f0cea6351873c9986d" Jan 27 08:07:16 crc kubenswrapper[4787]: I0127 08:07:16.080886 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g" Jan 27 08:07:23 crc kubenswrapper[4787]: I0127 08:07:23.355754 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq"] Jan 27 08:07:23 crc kubenswrapper[4787]: E0127 08:07:23.357401 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" containerName="util" Jan 27 08:07:23 crc kubenswrapper[4787]: I0127 08:07:23.357467 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" containerName="util" Jan 27 08:07:23 crc kubenswrapper[4787]: E0127 08:07:23.357505 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" containerName="pull" Jan 27 08:07:23 crc kubenswrapper[4787]: I0127 08:07:23.357512 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" containerName="pull" Jan 27 08:07:23 crc kubenswrapper[4787]: E0127 08:07:23.357527 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" containerName="extract" Jan 27 08:07:23 crc kubenswrapper[4787]: I0127 08:07:23.357536 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" containerName="extract" Jan 27 08:07:23 crc kubenswrapper[4787]: I0127 08:07:23.357862 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63e7c4f-d194-4209-a8ac-f67c9dc7dde1" containerName="extract" Jan 27 08:07:23 crc kubenswrapper[4787]: I0127 08:07:23.360540 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" Jan 27 08:07:23 crc kubenswrapper[4787]: I0127 08:07:23.364966 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq"] Jan 27 08:07:23 crc kubenswrapper[4787]: I0127 08:07:23.366378 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-7jtsz" Jan 27 08:07:23 crc kubenswrapper[4787]: I0127 08:07:23.369037 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lftt5\" (UniqueName: \"kubernetes.io/projected/86e4bb51-0d54-49f8-b158-8338d845d92e-kube-api-access-lftt5\") pod \"openstack-operator-controller-init-59cc4f5964-ch4bq\" (UID: \"86e4bb51-0d54-49f8-b158-8338d845d92e\") " pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" Jan 27 08:07:23 crc kubenswrapper[4787]: I0127 08:07:23.470985 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lftt5\" (UniqueName: \"kubernetes.io/projected/86e4bb51-0d54-49f8-b158-8338d845d92e-kube-api-access-lftt5\") pod \"openstack-operator-controller-init-59cc4f5964-ch4bq\" (UID: \"86e4bb51-0d54-49f8-b158-8338d845d92e\") " pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" Jan 27 08:07:23 crc kubenswrapper[4787]: I0127 08:07:23.492044 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lftt5\" (UniqueName: \"kubernetes.io/projected/86e4bb51-0d54-49f8-b158-8338d845d92e-kube-api-access-lftt5\") pod \"openstack-operator-controller-init-59cc4f5964-ch4bq\" (UID: \"86e4bb51-0d54-49f8-b158-8338d845d92e\") " pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" Jan 27 08:07:23 crc kubenswrapper[4787]: I0127 08:07:23.687503 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" Jan 27 08:07:24 crc kubenswrapper[4787]: I0127 08:07:24.192260 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq"] Jan 27 08:07:25 crc kubenswrapper[4787]: I0127 08:07:25.147895 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" event={"ID":"86e4bb51-0d54-49f8-b158-8338d845d92e","Type":"ContainerStarted","Data":"0a86cb2179439de11cc5dfa2c7d576a7b85ad83b8b80b1dd922ae9e6f93584cf"} Jan 27 08:07:29 crc kubenswrapper[4787]: I0127 08:07:29.178925 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" event={"ID":"86e4bb51-0d54-49f8-b158-8338d845d92e","Type":"ContainerStarted","Data":"5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f"} Jan 27 08:07:29 crc kubenswrapper[4787]: I0127 08:07:29.179325 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" Jan 27 08:07:29 crc kubenswrapper[4787]: I0127 08:07:29.213726 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" podStartSLOduration=2.115656354 podStartE2EDuration="6.213700132s" podCreationTimestamp="2026-01-27 08:07:23 +0000 UTC" firstStartedPulling="2026-01-27 08:07:24.217443135 +0000 UTC m=+949.869798617" lastFinishedPulling="2026-01-27 08:07:28.315486903 +0000 UTC m=+953.967842395" observedRunningTime="2026-01-27 08:07:29.207870834 +0000 UTC m=+954.860226336" watchObservedRunningTime="2026-01-27 08:07:29.213700132 +0000 UTC m=+954.866055624" Jan 27 08:07:33 crc kubenswrapper[4787]: I0127 08:07:33.691914 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.102522 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.103976 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.106343 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-fqkfl" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.115204 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.116276 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.133300 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-6tq26" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.139113 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.154392 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.175639 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.176707 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.182472 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-88lqz" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.196284 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.197614 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.202907 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2w96c" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.213194 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.219386 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.222789 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.224114 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.228631 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-tj4rr" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.231164 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfgz4\" (UniqueName: \"kubernetes.io/projected/85bafad5-30f3-4931-bd2a-0d45e2b0f844-kube-api-access-bfgz4\") pod \"cinder-operator-controller-manager-655bf9cfbb-6tnpg\" (UID: \"85bafad5-30f3-4931-bd2a-0d45e2b0f844\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.231222 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk7bm\" (UniqueName: \"kubernetes.io/projected/4b5515cb-f539-4a9b-8c46-36c5c62c5c93-kube-api-access-bk7bm\") pod \"barbican-operator-controller-manager-65ff799cfd-d5r5z\" (UID: \"4b5515cb-f539-4a9b-8c46-36c5c62c5c93\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.231748 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.233124 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.235194 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jdq8b" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.246652 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.266064 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.275857 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.276963 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.281027 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4j2xf" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.281257 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.297694 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.298763 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.305383 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.309574 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qpx2f" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.310592 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.311563 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.314311 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-44fx6" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.323689 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.380374 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.400077 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzws7\" (UniqueName: \"kubernetes.io/projected/32019c4d-61e3-4c27-86d4-3a79bb40ce70-kube-api-access-jzws7\") pod \"keystone-operator-controller-manager-55f684fd56-2k8pd\" (UID: \"32019c4d-61e3-4c27-86d4-3a79bb40ce70\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.400140 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gp8m\" (UniqueName: \"kubernetes.io/projected/dd8dbc24-8fb8-4ae2-96c3-b7112c7d7c65-kube-api-access-5gp8m\") pod \"designate-operator-controller-manager-77554cdc5c-627g6\" (UID: \"dd8dbc24-8fb8-4ae2-96c3-b7112c7d7c65\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.400207 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb46d\" (UniqueName: \"kubernetes.io/projected/a21cc0b9-75d2-4ddb-925e-23eeaac2dd35-kube-api-access-jb46d\") pod \"horizon-operator-controller-manager-77d5c5b54f-vkmnm\" (UID: \"a21cc0b9-75d2-4ddb-925e-23eeaac2dd35\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.400260 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsqjb\" (UniqueName: \"kubernetes.io/projected/be4be3e5-1589-4e84-9d35-f104bc3d5ad4-kube-api-access-wsqjb\") pod \"glance-operator-controller-manager-67dd55ff59-kk6mj\" (UID: \"be4be3e5-1589-4e84-9d35-f104bc3d5ad4\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.400279 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z7sm\" (UniqueName: \"kubernetes.io/projected/ff52e5a2-2aab-451a-8869-72c1a506940a-kube-api-access-6z7sm\") pod \"ironic-operator-controller-manager-768b776ffb-j7klk\" (UID: \"ff52e5a2-2aab-451a-8869-72c1a506940a\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.400299 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csw77\" (UniqueName: \"kubernetes.io/projected/c634491f-6069-472d-bff7-d8903d0afa1d-kube-api-access-csw77\") pod \"infra-operator-controller-manager-7d75bc88d5-drtqw\" (UID: \"c634491f-6069-472d-bff7-d8903d0afa1d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.400342 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfgz4\" (UniqueName: \"kubernetes.io/projected/85bafad5-30f3-4931-bd2a-0d45e2b0f844-kube-api-access-bfgz4\") pod \"cinder-operator-controller-manager-655bf9cfbb-6tnpg\" (UID: \"85bafad5-30f3-4931-bd2a-0d45e2b0f844\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.400371 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv4mh\" (UniqueName: \"kubernetes.io/projected/f621b9f3-ead9-4fab-b33f-f9d6179e8f3f-kube-api-access-qv4mh\") pod \"heat-operator-controller-manager-575ffb885b-57zhq\" (UID: \"f621b9f3-ead9-4fab-b33f-f9d6179e8f3f\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.400399 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk7bm\" (UniqueName: \"kubernetes.io/projected/4b5515cb-f539-4a9b-8c46-36c5c62c5c93-kube-api-access-bk7bm\") pod \"barbican-operator-controller-manager-65ff799cfd-d5r5z\" (UID: \"4b5515cb-f539-4a9b-8c46-36c5c62c5c93\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.400436 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-drtqw\" (UID: \"c634491f-6069-472d-bff7-d8903d0afa1d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.428897 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.430581 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.435687 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-lc2ft" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.437441 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk7bm\" (UniqueName: \"kubernetes.io/projected/4b5515cb-f539-4a9b-8c46-36c5c62c5c93-kube-api-access-bk7bm\") pod \"barbican-operator-controller-manager-65ff799cfd-d5r5z\" (UID: \"4b5515cb-f539-4a9b-8c46-36c5c62c5c93\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.441019 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfgz4\" (UniqueName: \"kubernetes.io/projected/85bafad5-30f3-4931-bd2a-0d45e2b0f844-kube-api-access-bfgz4\") pod \"cinder-operator-controller-manager-655bf9cfbb-6tnpg\" (UID: \"85bafad5-30f3-4931-bd2a-0d45e2b0f844\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.444129 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.445297 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.448810 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.455212 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.456716 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.460128 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.474714 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.485537 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.489945 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-kmxgn" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.490270 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-cdz4s" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.499891 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.501197 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.507528 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzws7\" (UniqueName: \"kubernetes.io/projected/32019c4d-61e3-4c27-86d4-3a79bb40ce70-kube-api-access-jzws7\") pod \"keystone-operator-controller-manager-55f684fd56-2k8pd\" (UID: \"32019c4d-61e3-4c27-86d4-3a79bb40ce70\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.507583 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gp8m\" (UniqueName: \"kubernetes.io/projected/dd8dbc24-8fb8-4ae2-96c3-b7112c7d7c65-kube-api-access-5gp8m\") pod \"designate-operator-controller-manager-77554cdc5c-627g6\" (UID: \"dd8dbc24-8fb8-4ae2-96c3-b7112c7d7c65\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.508600 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-d9cbf" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.508746 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb46d\" (UniqueName: \"kubernetes.io/projected/a21cc0b9-75d2-4ddb-925e-23eeaac2dd35-kube-api-access-jb46d\") pod \"horizon-operator-controller-manager-77d5c5b54f-vkmnm\" (UID: \"a21cc0b9-75d2-4ddb-925e-23eeaac2dd35\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.508846 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsqjb\" (UniqueName: \"kubernetes.io/projected/be4be3e5-1589-4e84-9d35-f104bc3d5ad4-kube-api-access-wsqjb\") pod \"glance-operator-controller-manager-67dd55ff59-kk6mj\" (UID: \"be4be3e5-1589-4e84-9d35-f104bc3d5ad4\") " 
pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.508888 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z7sm\" (UniqueName: \"kubernetes.io/projected/ff52e5a2-2aab-451a-8869-72c1a506940a-kube-api-access-6z7sm\") pod \"ironic-operator-controller-manager-768b776ffb-j7klk\" (UID: \"ff52e5a2-2aab-451a-8869-72c1a506940a\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.508916 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csw77\" (UniqueName: \"kubernetes.io/projected/c634491f-6069-472d-bff7-d8903d0afa1d-kube-api-access-csw77\") pod \"infra-operator-controller-manager-7d75bc88d5-drtqw\" (UID: \"c634491f-6069-472d-bff7-d8903d0afa1d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.508961 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv4mh\" (UniqueName: \"kubernetes.io/projected/f621b9f3-ead9-4fab-b33f-f9d6179e8f3f-kube-api-access-qv4mh\") pod \"heat-operator-controller-manager-575ffb885b-57zhq\" (UID: \"f621b9f3-ead9-4fab-b33f-f9d6179e8f3f\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.509020 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-drtqw\" (UID: \"c634491f-6069-472d-bff7-d8903d0afa1d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:07:52 crc kubenswrapper[4787]: E0127 08:07:52.509162 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 08:07:52 crc kubenswrapper[4787]: E0127 08:07:52.509237 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert podName:c634491f-6069-472d-bff7-d8903d0afa1d nodeName:}" failed. No retries permitted until 2026-01-27 08:07:53.009210412 +0000 UTC m=+978.661565904 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert") pod "infra-operator-controller-manager-7d75bc88d5-drtqw" (UID: "c634491f-6069-472d-bff7-d8903d0afa1d") : secret "infra-operator-webhook-server-cert" not found Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.523674 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.524719 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.555537 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-z9xlm" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.568201 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.573806 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsqjb\" (UniqueName: \"kubernetes.io/projected/be4be3e5-1589-4e84-9d35-f104bc3d5ad4-kube-api-access-wsqjb\") pod \"glance-operator-controller-manager-67dd55ff59-kk6mj\" (UID: \"be4be3e5-1589-4e84-9d35-f104bc3d5ad4\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.574254 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzws7\" (UniqueName: \"kubernetes.io/projected/32019c4d-61e3-4c27-86d4-3a79bb40ce70-kube-api-access-jzws7\") pod \"keystone-operator-controller-manager-55f684fd56-2k8pd\" (UID: \"32019c4d-61e3-4c27-86d4-3a79bb40ce70\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.585325 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv4mh\" (UniqueName: \"kubernetes.io/projected/f621b9f3-ead9-4fab-b33f-f9d6179e8f3f-kube-api-access-qv4mh\") pod \"heat-operator-controller-manager-575ffb885b-57zhq\" (UID: \"f621b9f3-ead9-4fab-b33f-f9d6179e8f3f\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.585329 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z7sm\" (UniqueName: \"kubernetes.io/projected/ff52e5a2-2aab-451a-8869-72c1a506940a-kube-api-access-6z7sm\") pod \"ironic-operator-controller-manager-768b776ffb-j7klk\" (UID: \"ff52e5a2-2aab-451a-8869-72c1a506940a\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.585994 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gp8m\" (UniqueName: \"kubernetes.io/projected/dd8dbc24-8fb8-4ae2-96c3-b7112c7d7c65-kube-api-access-5gp8m\") pod \"designate-operator-controller-manager-77554cdc5c-627g6\" (UID: \"dd8dbc24-8fb8-4ae2-96c3-b7112c7d7c65\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.587291 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb46d\" (UniqueName: \"kubernetes.io/projected/a21cc0b9-75d2-4ddb-925e-23eeaac2dd35-kube-api-access-jb46d\") pod \"horizon-operator-controller-manager-77d5c5b54f-vkmnm\" (UID: \"a21cc0b9-75d2-4ddb-925e-23eeaac2dd35\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.594083 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csw77\" (UniqueName: \"kubernetes.io/projected/c634491f-6069-472d-bff7-d8903d0afa1d-kube-api-access-csw77\") pod \"infra-operator-controller-manager-7d75bc88d5-drtqw\" (UID: 
\"c634491f-6069-472d-bff7-d8903d0afa1d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.597972 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.599047 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.606497 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.606520 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wrq8q" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.609985 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8r9f\" (UniqueName: \"kubernetes.io/projected/be114c8e-3aa0-41b4-9954-d07c800d3cfc-kube-api-access-z8r9f\") pod \"neutron-operator-controller-manager-7ffd8d76d4-vk5vt\" (UID: \"be114c8e-3aa0-41b4-9954-d07c800d3cfc\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.610111 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vdjb\" (UniqueName: \"kubernetes.io/projected/3ad17a28-2cb1-4455-8d38-58919a287ae6-kube-api-access-2vdjb\") pod \"nova-operator-controller-manager-57d6b69d8b-s8kvs\" (UID: \"3ad17a28-2cb1-4455-8d38-58919a287ae6\") " pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.610150 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmbfj\" (UniqueName: \"kubernetes.io/projected/e59e94f5-9033-488d-a186-476cb6cbb3f0-kube-api-access-zmbfj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-88gn4\" (UID: \"e59e94f5-9033-488d-a186-476cb6cbb3f0\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.610194 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5xl\" (UniqueName: \"kubernetes.io/projected/59efc0ff-5727-48f9-91b7-36533ba5f94a-kube-api-access-cx5xl\") pod \"manila-operator-controller-manager-849fcfbb6b-tbr5m\" (UID: \"59efc0ff-5727-48f9-91b7-36533ba5f94a\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.654340 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.655729 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.658441 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-sgrt9" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.680622 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.681106 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.681813 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.684390 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-jqrcp" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.689810 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.711025 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.725844 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.726437 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vdjb\" (UniqueName: \"kubernetes.io/projected/3ad17a28-2cb1-4455-8d38-58919a287ae6-kube-api-access-2vdjb\") pod \"nova-operator-controller-manager-57d6b69d8b-s8kvs\" (UID: \"3ad17a28-2cb1-4455-8d38-58919a287ae6\") " pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.727007 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmbfj\" (UniqueName: \"kubernetes.io/projected/e59e94f5-9033-488d-a186-476cb6cbb3f0-kube-api-access-zmbfj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-88gn4\" (UID: \"e59e94f5-9033-488d-a186-476cb6cbb3f0\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.727054 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbvs8\" (UniqueName: \"kubernetes.io/projected/601fe40b-5553-4225-b5bd-214428d6fa68-kube-api-access-fbvs8\") pod \"octavia-operator-controller-manager-7875d7675-x4nz5\" (UID: \"601fe40b-5553-4225-b5bd-214428d6fa68\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.727086 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5xl\" (UniqueName: \"kubernetes.io/projected/59efc0ff-5727-48f9-91b7-36533ba5f94a-kube-api-access-cx5xl\") pod \"manila-operator-controller-manager-849fcfbb6b-tbr5m\" (UID: \"59efc0ff-5727-48f9-91b7-36533ba5f94a\") " 
pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.727135 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8r9f\" (UniqueName: \"kubernetes.io/projected/be114c8e-3aa0-41b4-9954-d07c800d3cfc-kube-api-access-z8r9f\") pod \"neutron-operator-controller-manager-7ffd8d76d4-vk5vt\" (UID: \"be114c8e-3aa0-41b4-9954-d07c800d3cfc\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.727166 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz9jt\" (UniqueName: \"kubernetes.io/projected/116b0532-a8d5-47ec-8e12-e7ef482094d6-kube-api-access-fz9jt\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq\" (UID: \"116b0532-a8d5-47ec-8e12-e7ef482094d6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.727225 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq\" (UID: \"116b0532-a8d5-47ec-8e12-e7ef482094d6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.728008 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.739685 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.740538 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.747524 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.749043 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-lvmhv" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.770658 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.770997 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vdjb\" (UniqueName: \"kubernetes.io/projected/3ad17a28-2cb1-4455-8d38-58919a287ae6-kube-api-access-2vdjb\") pod \"nova-operator-controller-manager-57d6b69d8b-s8kvs\" (UID: \"3ad17a28-2cb1-4455-8d38-58919a287ae6\") " pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.773988 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.782846 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.784440 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8r9f\" (UniqueName: \"kubernetes.io/projected/be114c8e-3aa0-41b4-9954-d07c800d3cfc-kube-api-access-z8r9f\") pod \"neutron-operator-controller-manager-7ffd8d76d4-vk5vt\" (UID: \"be114c8e-3aa0-41b4-9954-d07c800d3cfc\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.794048 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.805402 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.808310 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmbfj\" (UniqueName: \"kubernetes.io/projected/e59e94f5-9033-488d-a186-476cb6cbb3f0-kube-api-access-zmbfj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-88gn4\" (UID: \"e59e94f5-9033-488d-a186-476cb6cbb3f0\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.808334 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.815309 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx5xl\" (UniqueName: \"kubernetes.io/projected/59efc0ff-5727-48f9-91b7-36533ba5f94a-kube-api-access-cx5xl\") pod \"manila-operator-controller-manager-849fcfbb6b-tbr5m\" (UID: \"59efc0ff-5727-48f9-91b7-36533ba5f94a\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.816173 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7jhlh" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.828359 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqndc\" (UniqueName: \"kubernetes.io/projected/a552e9e7-f9fa-4b71-9ec6-848a2230279f-kube-api-access-fqndc\") pod \"placement-operator-controller-manager-79d5ccc684-qhnz4\" (UID: \"a552e9e7-f9fa-4b71-9ec6-848a2230279f\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.828444 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz9jt\" (UniqueName: \"kubernetes.io/projected/116b0532-a8d5-47ec-8e12-e7ef482094d6-kube-api-access-fz9jt\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq\" (UID: \"116b0532-a8d5-47ec-8e12-e7ef482094d6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.828501 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq\" (UID: \"116b0532-a8d5-47ec-8e12-e7ef482094d6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.828542 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njgtm\" (UniqueName: \"kubernetes.io/projected/7cae616e-6aa8-405f-b10a-2a5346fae5b4-kube-api-access-njgtm\") pod \"ovn-operator-controller-manager-6f75f45d54-l9mc7\" (UID: \"7cae616e-6aa8-405f-b10a-2a5346fae5b4\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.828577 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbvs8\" (UniqueName: \"kubernetes.io/projected/601fe40b-5553-4225-b5bd-214428d6fa68-kube-api-access-fbvs8\") pod \"octavia-operator-controller-manager-7875d7675-x4nz5\" (UID: \"601fe40b-5553-4225-b5bd-214428d6fa68\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.829047 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj" Jan 27 08:07:52 crc kubenswrapper[4787]: E0127 08:07:52.829746 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:07:52 crc kubenswrapper[4787]: E0127 08:07:52.829860 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert podName:116b0532-a8d5-47ec-8e12-e7ef482094d6 nodeName:}" failed. No retries permitted until 2026-01-27 08:07:53.329837771 +0000 UTC m=+978.982193263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" (UID: "116b0532-a8d5-47ec-8e12-e7ef482094d6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.858647 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.869381 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.871951 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.872439 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.875799 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz9jt\" (UniqueName: \"kubernetes.io/projected/116b0532-a8d5-47ec-8e12-e7ef482094d6-kube-api-access-fz9jt\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq\" (UID: \"116b0532-a8d5-47ec-8e12-e7ef482094d6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.892862 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbvs8\" (UniqueName: \"kubernetes.io/projected/601fe40b-5553-4225-b5bd-214428d6fa68-kube-api-access-fbvs8\") pod \"octavia-operator-controller-manager-7875d7675-x4nz5\" (UID: \"601fe40b-5553-4225-b5bd-214428d6fa68\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.929469 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njgtm\" (UniqueName: \"kubernetes.io/projected/7cae616e-6aa8-405f-b10a-2a5346fae5b4-kube-api-access-njgtm\") pod \"ovn-operator-controller-manager-6f75f45d54-l9mc7\" (UID: \"7cae616e-6aa8-405f-b10a-2a5346fae5b4\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.929574 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqndc\" (UniqueName: \"kubernetes.io/projected/a552e9e7-f9fa-4b71-9ec6-848a2230279f-kube-api-access-fqndc\") pod \"placement-operator-controller-manager-79d5ccc684-qhnz4\" (UID: \"a552e9e7-f9fa-4b71-9ec6-848a2230279f\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.929612 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksbq4\" (UniqueName: \"kubernetes.io/projected/89d7a7e7-443a-486f-b8b7-a024082b9fba-kube-api-access-ksbq4\") pod \"swift-operator-controller-manager-547cbdb99f-pxbjw\" (UID: \"89d7a7e7-443a-486f-b8b7-a024082b9fba\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.929676 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjsf\" (UniqueName: \"kubernetes.io/projected/7986c3dd-6d06-4892-9fed-1b396669685b-kube-api-access-wrjsf\") pod \"telemetry-operator-controller-manager-799bc87c89-cqpjk\" (UID: \"7986c3dd-6d06-4892-9fed-1b396669685b\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.941122 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj"] Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.952848 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.954198 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj" Jan 27 08:07:52 crc kubenswrapper[4787]: I0127 08:07:52.975298 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-678l7" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.002301 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqndc\" (UniqueName: \"kubernetes.io/projected/a552e9e7-f9fa-4b71-9ec6-848a2230279f-kube-api-access-fqndc\") pod \"placement-operator-controller-manager-79d5ccc684-qhnz4\" (UID: \"a552e9e7-f9fa-4b71-9ec6-848a2230279f\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.002429 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njgtm\" (UniqueName: \"kubernetes.io/projected/7cae616e-6aa8-405f-b10a-2a5346fae5b4-kube-api-access-njgtm\") pod \"ovn-operator-controller-manager-6f75f45d54-l9mc7\" (UID: \"7cae616e-6aa8-405f-b10a-2a5346fae5b4\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.010235 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.022414 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.024075 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj"] Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.039168 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksbq4\" (UniqueName: \"kubernetes.io/projected/89d7a7e7-443a-486f-b8b7-a024082b9fba-kube-api-access-ksbq4\") pod \"swift-operator-controller-manager-547cbdb99f-pxbjw\" (UID: \"89d7a7e7-443a-486f-b8b7-a024082b9fba\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.039304 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjsf\" (UniqueName: \"kubernetes.io/projected/7986c3dd-6d06-4892-9fed-1b396669685b-kube-api-access-wrjsf\") pod \"telemetry-operator-controller-manager-799bc87c89-cqpjk\" (UID: \"7986c3dd-6d06-4892-9fed-1b396669685b\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.039360 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-drtqw\" (UID: \"c634491f-6069-472d-bff7-d8903d0afa1d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:07:53 crc kubenswrapper[4787]: E0127 08:07:53.039620 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 08:07:53 crc kubenswrapper[4787]: E0127 08:07:53.039714 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert podName:c634491f-6069-472d-bff7-d8903d0afa1d nodeName:}" failed. No retries permitted until 2026-01-27 08:07:54.039686385 +0000 UTC m=+979.692041877 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert") pod "infra-operator-controller-manager-7d75bc88d5-drtqw" (UID: "c634491f-6069-472d-bff7-d8903d0afa1d") : secret "infra-operator-webhook-server-cert" not found Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.068153 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksbq4\" (UniqueName: \"kubernetes.io/projected/89d7a7e7-443a-486f-b8b7-a024082b9fba-kube-api-access-ksbq4\") pod \"swift-operator-controller-manager-547cbdb99f-pxbjw\" (UID: \"89d7a7e7-443a-486f-b8b7-a024082b9fba\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.071933 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjsf\" (UniqueName: \"kubernetes.io/projected/7986c3dd-6d06-4892-9fed-1b396669685b-kube-api-access-wrjsf\") pod \"telemetry-operator-controller-manager-799bc87c89-cqpjk\" (UID: \"7986c3dd-6d06-4892-9fed-1b396669685b\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.075198 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9"] Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.083827 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.086081 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.112450 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7jstv" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.142067 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkf4m\" (UniqueName: \"kubernetes.io/projected/7ec92388-f799-43cc-9235-6d4b717bc98e-kube-api-access-hkf4m\") pod \"test-operator-controller-manager-69797bbcbd-dnxpj\" (UID: \"7ec92388-f799-43cc-9235-6d4b717bc98e\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.142489 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.153050 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9"] Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.182242 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn"] Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.200426 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn"] Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.200491 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh"] Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.201839 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.201932 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.208160 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6z6nd" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.209866 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.210150 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh"] Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.210485 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qvx9b" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.212313 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.241048 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.247029 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkf4m\" (UniqueName: \"kubernetes.io/projected/7ec92388-f799-43cc-9235-6d4b717bc98e-kube-api-access-hkf4m\") pod \"test-operator-controller-manager-69797bbcbd-dnxpj\" (UID: \"7ec92388-f799-43cc-9235-6d4b717bc98e\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.247702 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgs22\" (UniqueName: \"kubernetes.io/projected/292c617a-b64f-4b53-b05e-360769308e43-kube-api-access-vgs22\") pod \"watcher-operator-controller-manager-75db85654f-8cmb9\" (UID: \"292c617a-b64f-4b53-b05e-360769308e43\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.296215 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkf4m\" (UniqueName: \"kubernetes.io/projected/7ec92388-f799-43cc-9235-6d4b717bc98e-kube-api-access-hkf4m\") pod \"test-operator-controller-manager-69797bbcbd-dnxpj\" (UID: \"7ec92388-f799-43cc-9235-6d4b717bc98e\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.323017 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd"] Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.352446 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgs22\" (UniqueName: \"kubernetes.io/projected/292c617a-b64f-4b53-b05e-360769308e43-kube-api-access-vgs22\") pod \"watcher-operator-controller-manager-75db85654f-8cmb9\" (UID: \"292c617a-b64f-4b53-b05e-360769308e43\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.352525 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.352593 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq\" (UID: \"116b0532-a8d5-47ec-8e12-e7ef482094d6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.352654 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvwxw\" (UniqueName: \"kubernetes.io/projected/7a0ecc5c-eebe-457b-9a72-804226878c7f-kube-api-access-rvwxw\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:53 crc 
kubenswrapper[4787]: I0127 08:07:53.352719 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.352739 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7r9d\" (UniqueName: \"kubernetes.io/projected/2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7-kube-api-access-g7r9d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5lxsh\" (UID: \"2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh" Jan 27 08:07:53 crc kubenswrapper[4787]: E0127 08:07:53.352855 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:07:53 crc kubenswrapper[4787]: E0127 08:07:53.352963 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert podName:116b0532-a8d5-47ec-8e12-e7ef482094d6 nodeName:}" failed. No retries permitted until 2026-01-27 08:07:54.352938514 +0000 UTC m=+980.005293996 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" (UID: "116b0532-a8d5-47ec-8e12-e7ef482094d6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.357392 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.384342 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgs22\" (UniqueName: \"kubernetes.io/projected/292c617a-b64f-4b53-b05e-360769308e43-kube-api-access-vgs22\") pod \"watcher-operator-controller-manager-75db85654f-8cmb9\" (UID: \"292c617a-b64f-4b53-b05e-360769308e43\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.435514 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg"] Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.458780 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvwxw\" (UniqueName: \"kubernetes.io/projected/7a0ecc5c-eebe-457b-9a72-804226878c7f-kube-api-access-rvwxw\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.458935 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.459002 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7r9d\" (UniqueName: \"kubernetes.io/projected/2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7-kube-api-access-g7r9d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5lxsh\" (UID: \"2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.459102 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:53 crc kubenswrapper[4787]: E0127 08:07:53.459394 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 08:07:53 crc kubenswrapper[4787]: E0127 08:07:53.459652 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs podName:7a0ecc5c-eebe-457b-9a72-804226878c7f nodeName:}" failed. No retries permitted until 2026-01-27 08:07:53.959629529 +0000 UTC m=+979.611985021 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs") pod "openstack-operator-controller-manager-c595dbb59-xc2xn" (UID: "7a0ecc5c-eebe-457b-9a72-804226878c7f") : secret "metrics-server-cert" not found Jan 27 08:07:53 crc kubenswrapper[4787]: E0127 08:07:53.461096 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 08:07:53 crc kubenswrapper[4787]: E0127 08:07:53.461187 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs podName:7a0ecc5c-eebe-457b-9a72-804226878c7f nodeName:}" failed. No retries permitted until 2026-01-27 08:07:53.961164252 +0000 UTC m=+979.613519744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs") pod "openstack-operator-controller-manager-c595dbb59-xc2xn" (UID: "7a0ecc5c-eebe-457b-9a72-804226878c7f") : secret "webhook-server-cert" not found Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.462838 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.477697 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z"] Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.497529 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.498465 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvwxw\" (UniqueName: \"kubernetes.io/projected/7a0ecc5c-eebe-457b-9a72-804226878c7f-kube-api-access-rvwxw\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.500145 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7r9d\" (UniqueName: \"kubernetes.io/projected/2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7-kube-api-access-g7r9d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5lxsh\" (UID: \"2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.569101 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.607713 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk"] Jan 27 08:07:53 crc kubenswrapper[4787]: W0127 08:07:53.669068 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff52e5a2_2aab_451a_8869_72c1a506940a.slice/crio-032da21e32681cb8b3f71da507d5b318acc4a91170de4f0fb91682730607f472 WatchSource:0}: Error finding container 032da21e32681cb8b3f71da507d5b318acc4a91170de4f0fb91682730607f472: Status 404 returned error can't find the container with id 032da21e32681cb8b3f71da507d5b318acc4a91170de4f0fb91682730607f472 Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.974722 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.974999 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:53 crc kubenswrapper[4787]: E0127 08:07:53.975143 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 08:07:53 crc kubenswrapper[4787]: E0127 08:07:53.975280 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs podName:7a0ecc5c-eebe-457b-9a72-804226878c7f nodeName:}" failed. No retries permitted until 2026-01-27 08:07:54.975247238 +0000 UTC m=+980.627602890 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs") pod "openstack-operator-controller-manager-c595dbb59-xc2xn" (UID: "7a0ecc5c-eebe-457b-9a72-804226878c7f") : secret "webhook-server-cert" not found Jan 27 08:07:53 crc kubenswrapper[4787]: E0127 08:07:53.975281 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 08:07:53 crc kubenswrapper[4787]: I0127 08:07:53.975328 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6"] Jan 27 08:07:53 crc kubenswrapper[4787]: E0127 08:07:53.975357 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs podName:7a0ecc5c-eebe-457b-9a72-804226878c7f nodeName:}" failed. No retries permitted until 2026-01-27 08:07:54.975339581 +0000 UTC m=+980.627695073 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs") pod "openstack-operator-controller-manager-c595dbb59-xc2xn" (UID: "7a0ecc5c-eebe-457b-9a72-804226878c7f") : secret "metrics-server-cert" not found Jan 27 08:07:53 crc kubenswrapper[4787]: W0127 08:07:53.979571 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd8dbc24_8fb8_4ae2_96c3_b7112c7d7c65.slice/crio-eca53bd6055d939eecd9fca096d39ca6f8a7901664fa791bcc77bc01443ec10d WatchSource:0}: Error finding container eca53bd6055d939eecd9fca096d39ca6f8a7901664fa791bcc77bc01443ec10d: Status 404 returned error can't find the container with id eca53bd6055d939eecd9fca096d39ca6f8a7901664fa791bcc77bc01443ec10d Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.076766 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-drtqw\" (UID: \"c634491f-6069-472d-bff7-d8903d0afa1d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:07:54 crc kubenswrapper[4787]: E0127 08:07:54.077438 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 08:07:54 crc kubenswrapper[4787]: E0127 08:07:54.077562 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert podName:c634491f-6069-472d-bff7-d8903d0afa1d nodeName:}" failed. No retries permitted until 2026-01-27 08:07:56.077522319 +0000 UTC m=+981.729877811 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert") pod "infra-operator-controller-manager-7d75bc88d5-drtqw" (UID: "c634491f-6069-472d-bff7-d8903d0afa1d") : secret "infra-operator-webhook-server-cert" not found Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.162679 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs"] Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.188015 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m"] Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.199330 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj"] Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.213574 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4"] Jan 27 08:07:54 crc kubenswrapper[4787]: W0127 08:07:54.234310 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode59e94f5_9033_488d_a186_476cb6cbb3f0.slice/crio-163193571d08059017947cd7cf8fd0d1cb32cef8703ab7dcaeb079aaf4905a0e WatchSource:0}: Error finding container 163193571d08059017947cd7cf8fd0d1cb32cef8703ab7dcaeb079aaf4905a0e: Status 404 returned error can't find the container with id 163193571d08059017947cd7cf8fd0d1cb32cef8703ab7dcaeb079aaf4905a0e Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.383802 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq\" (UID: \"116b0532-a8d5-47ec-8e12-e7ef482094d6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:07:54 crc kubenswrapper[4787]: E0127 08:07:54.384024 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:07:54 crc kubenswrapper[4787]: E0127 08:07:54.384145 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert podName:116b0532-a8d5-47ec-8e12-e7ef482094d6 nodeName:}" failed. No retries permitted until 2026-01-27 08:07:56.38410064 +0000 UTC m=+982.036456132 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" (UID: "116b0532-a8d5-47ec-8e12-e7ef482094d6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.405351 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk"] Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.409722 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq"] Jan 27 08:07:54 crc kubenswrapper[4787]: W0127 08:07:54.418271 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf621b9f3_ead9_4fab_b33f_f9d6179e8f3f.slice/crio-330c6e5a690fb8746a29b9b9a618e088182e74f96ff95aba590f091110934895 WatchSource:0}: Error finding container 330c6e5a690fb8746a29b9b9a618e088182e74f96ff95aba590f091110934895: Status 404 returned error can't find the container with id 330c6e5a690fb8746a29b9b9a618e088182e74f96ff95aba590f091110934895 Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.419204 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4"] Jan 27 08:07:54 crc kubenswrapper[4787]: W0127 08:07:54.427062 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda552e9e7_f9fa_4b71_9ec6_848a2230279f.slice/crio-1b5f31b17f22d68dcd85e84abeeb681ebb8be676669effbdd506fe2c2680cb3b WatchSource:0}: Error finding container 1b5f31b17f22d68dcd85e84abeeb681ebb8be676669effbdd506fe2c2680cb3b: Status 404 returned error can't find the container with id 1b5f31b17f22d68dcd85e84abeeb681ebb8be676669effbdd506fe2c2680cb3b Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.427572 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw"] Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.440137 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg" event={"ID":"85bafad5-30f3-4931-bd2a-0d45e2b0f844","Type":"ContainerStarted","Data":"3f3872a2defe40e27c44947b8f9e9b0563804154fae31f371a3f18711ac435c9"} Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.444626 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6" event={"ID":"dd8dbc24-8fb8-4ae2-96c3-b7112c7d7c65","Type":"ContainerStarted","Data":"eca53bd6055d939eecd9fca096d39ca6f8a7901664fa791bcc77bc01443ec10d"} Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.448758 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk" event={"ID":"ff52e5a2-2aab-451a-8869-72c1a506940a","Type":"ContainerStarted","Data":"032da21e32681cb8b3f71da507d5b318acc4a91170de4f0fb91682730607f472"} Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.454012 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m" 
event={"ID":"59efc0ff-5727-48f9-91b7-36533ba5f94a","Type":"ContainerStarted","Data":"bcf91c200a19a125bd96b260351e3ca624f79e52d3faf95dd4a2a742e6300fad"} Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.457319 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z" event={"ID":"4b5515cb-f539-4a9b-8c46-36c5c62c5c93","Type":"ContainerStarted","Data":"b9f2fd7edf94d8532f5bae448ea78f10923a0258cb6f52813337ff00de222430"} Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.458835 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj" event={"ID":"be4be3e5-1589-4e84-9d35-f104bc3d5ad4","Type":"ContainerStarted","Data":"9c7808b0a996e5490e2f610d6bbcb3a413add6538ba4fbbaf60340489e0bae8d"} Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.461564 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd" event={"ID":"32019c4d-61e3-4c27-86d4-3a79bb40ce70","Type":"ContainerStarted","Data":"2d31230a0a208ed4a18bd1d643f883a599c2806dcbaf94afd08235b18e6894c4"} Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.463277 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" event={"ID":"3ad17a28-2cb1-4455-8d38-58919a287ae6","Type":"ContainerStarted","Data":"2b4b23f00d60ad642771edd29b4d11331ba1889dbee6abb9a2f10d40a9233501"} Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.464411 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4" event={"ID":"e59e94f5-9033-488d-a186-476cb6cbb3f0","Type":"ContainerStarted","Data":"163193571d08059017947cd7cf8fd0d1cb32cef8703ab7dcaeb079aaf4905a0e"} Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.596816 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt"] Jan 27 08:07:54 crc kubenswrapper[4787]: W0127 08:07:54.626128 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe114c8e_3aa0_41b4_9954_d07c800d3cfc.slice/crio-83607832ac2a45723a9201cf6c2ad1cfc4d676eff390ca75c8de7ad98f54f282 WatchSource:0}: Error finding container 83607832ac2a45723a9201cf6c2ad1cfc4d676eff390ca75c8de7ad98f54f282: Status 404 returned error can't find the container with id 83607832ac2a45723a9201cf6c2ad1cfc4d676eff390ca75c8de7ad98f54f282 Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.658213 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj"] Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.681434 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7"] Jan 27 08:07:54 crc kubenswrapper[4787]: E0127 08:07:54.691057 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fbvs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7875d7675-x4nz5_openstack-operators(601fe40b-5553-4225-b5bd-214428d6fa68): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 08:07:54 crc kubenswrapper[4787]: E0127 08:07:54.692202 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" podUID="601fe40b-5553-4225-b5bd-214428d6fa68" Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.705940 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5"] Jan 27 08:07:54 crc kubenswrapper[4787]: W0127 08:07:54.709232 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod292c617a_b64f_4b53_b05e_360769308e43.slice/crio-fa0cc93f7c0c80a2c1c3d97c946f9567e38bf5acb54637c053b4ca79e9966972 WatchSource:0}: Error finding container fa0cc93f7c0c80a2c1c3d97c946f9567e38bf5acb54637c053b4ca79e9966972: Status 404 returned error can't find the container with id fa0cc93f7c0c80a2c1c3d97c946f9567e38bf5acb54637c053b4ca79e9966972 Jan 27 08:07:54 crc kubenswrapper[4787]: E0127 08:07:54.723119 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:01f06b67539933628ebeb3c8fe813467eb6b478ba9fd4c6ff9892b8306e04f7a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vgs22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75db85654f-8cmb9_openstack-operators(292c617a-b64f-4b53-b05e-360769308e43): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 08:07:54 crc kubenswrapper[4787]: E0127 08:07:54.733208 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" podUID="292c617a-b64f-4b53-b05e-360769308e43" Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.737457 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh"] Jan 27 08:07:54 crc kubenswrapper[4787]: W0127 08:07:54.743423 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cae616e_6aa8_405f_b10a_2a5346fae5b4.slice/crio-107eab0acfdcd94b74305a50569753b82cfa639bb204a4f0f8ade19977c7f5c1 WatchSource:0}: Error finding container 107eab0acfdcd94b74305a50569753b82cfa639bb204a4f0f8ade19977c7f5c1: Status 404 returned error can't find the container with id 107eab0acfdcd94b74305a50569753b82cfa639bb204a4f0f8ade19977c7f5c1 Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.748127 4787 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm"] Jan 27 08:07:54 crc kubenswrapper[4787]: I0127 08:07:54.755464 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9"] Jan 27 08:07:54 crc kubenswrapper[4787]: E0127 08:07:54.761819 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jb46d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-vkmnm_openstack-operators(a21cc0b9-75d2-4ddb-925e-23eeaac2dd35): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 08:07:54 crc kubenswrapper[4787]: E0127 08:07:54.763919 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" podUID="a21cc0b9-75d2-4ddb-925e-23eeaac2dd35" Jan 27 08:07:54 crc kubenswrapper[4787]: E0127 08:07:54.768282 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-njgtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-l9mc7_openstack-operators(7cae616e-6aa8-405f-b10a-2a5346fae5b4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 08:07:54 crc kubenswrapper[4787]: E0127 08:07:54.769821 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" podUID="7cae616e-6aa8-405f-b10a-2a5346fae5b4" Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.005443 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.005535 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:55 crc kubenswrapper[4787]: E0127 08:07:55.005643 4787 secret.go:188] Couldn't get secret 
openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 08:07:55 crc kubenswrapper[4787]: E0127 08:07:55.005749 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs podName:7a0ecc5c-eebe-457b-9a72-804226878c7f nodeName:}" failed. No retries permitted until 2026-01-27 08:07:57.00572117 +0000 UTC m=+982.658076662 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs") pod "openstack-operator-controller-manager-c595dbb59-xc2xn" (UID: "7a0ecc5c-eebe-457b-9a72-804226878c7f") : secret "webhook-server-cert" not found Jan 27 08:07:55 crc kubenswrapper[4787]: E0127 08:07:55.005943 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 08:07:55 crc kubenswrapper[4787]: E0127 08:07:55.006090 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs podName:7a0ecc5c-eebe-457b-9a72-804226878c7f nodeName:}" failed. No retries permitted until 2026-01-27 08:07:57.006021528 +0000 UTC m=+982.658377140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs") pod "openstack-operator-controller-manager-c595dbb59-xc2xn" (UID: "7a0ecc5c-eebe-457b-9a72-804226878c7f") : secret "metrics-server-cert" not found Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.503415 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" event={"ID":"a21cc0b9-75d2-4ddb-925e-23eeaac2dd35","Type":"ContainerStarted","Data":"7ef3864535971cf1d47a5004d29759b51abf27aa0dfe40f03e886d65c1b944fa"} Jan 27 08:07:55 crc kubenswrapper[4787]: E0127 08:07:55.513532 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" podUID="a21cc0b9-75d2-4ddb-925e-23eeaac2dd35" Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.524198 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh" event={"ID":"2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7","Type":"ContainerStarted","Data":"e1834a0dac7deedc263c0873a2ac7b531f871472a7d2c296f0207869bbc0e839"} Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.549371 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4" event={"ID":"a552e9e7-f9fa-4b71-9ec6-848a2230279f","Type":"ContainerStarted","Data":"1b5f31b17f22d68dcd85e84abeeb681ebb8be676669effbdd506fe2c2680cb3b"} Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.552212 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" event={"ID":"7cae616e-6aa8-405f-b10a-2a5346fae5b4","Type":"ContainerStarted","Data":"107eab0acfdcd94b74305a50569753b82cfa639bb204a4f0f8ade19977c7f5c1"} Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.554756 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw" event={"ID":"89d7a7e7-443a-486f-b8b7-a024082b9fba","Type":"ContainerStarted","Data":"f51b1dc9927bde3d2ebc96fcdf0f6ba6cf63fed25568162cd40133d5187d30a5"} Jan 27 08:07:55 crc kubenswrapper[4787]: E0127 08:07:55.555003 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" podUID="7cae616e-6aa8-405f-b10a-2a5346fae5b4" Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.564023 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" event={"ID":"601fe40b-5553-4225-b5bd-214428d6fa68","Type":"ContainerStarted","Data":"03a831befd65af301aee0eb9feea4090c097cb6b811d96d224e221128e7e239c"} Jan 27 08:07:55 crc kubenswrapper[4787]: E0127 08:07:55.571341 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" podUID="601fe40b-5553-4225-b5bd-214428d6fa68" Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.615749 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj" event={"ID":"7ec92388-f799-43cc-9235-6d4b717bc98e","Type":"ContainerStarted","Data":"ec38725f2bbdc76cb40f38f81bd044c96a957dfb80218ce8b829a6d2af1d778b"} Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.619569 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq" event={"ID":"f621b9f3-ead9-4fab-b33f-f9d6179e8f3f","Type":"ContainerStarted","Data":"330c6e5a690fb8746a29b9b9a618e088182e74f96ff95aba590f091110934895"} Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.621114 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt" event={"ID":"be114c8e-3aa0-41b4-9954-d07c800d3cfc","Type":"ContainerStarted","Data":"83607832ac2a45723a9201cf6c2ad1cfc4d676eff390ca75c8de7ad98f54f282"} Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.624500 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk" event={"ID":"7986c3dd-6d06-4892-9fed-1b396669685b","Type":"ContainerStarted","Data":"f92bac400cb19bab8b6063da65c14f8a6168df5b411b283aa8b61db6ca94bcfe"} Jan 27 08:07:55 crc kubenswrapper[4787]: I0127 08:07:55.628104 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" event={"ID":"292c617a-b64f-4b53-b05e-360769308e43","Type":"ContainerStarted","Data":"fa0cc93f7c0c80a2c1c3d97c946f9567e38bf5acb54637c053b4ca79e9966972"} Jan 27 08:07:55 crc kubenswrapper[4787]: E0127 08:07:55.630218 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/watcher-operator@sha256:01f06b67539933628ebeb3c8fe813467eb6b478ba9fd4c6ff9892b8306e04f7a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" podUID="292c617a-b64f-4b53-b05e-360769308e43" Jan 27 08:07:56 crc kubenswrapper[4787]: I0127 08:07:56.139053 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-drtqw\" (UID: \"c634491f-6069-472d-bff7-d8903d0afa1d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:07:56 crc kubenswrapper[4787]: E0127 08:07:56.139301 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 08:07:56 crc kubenswrapper[4787]: E0127 08:07:56.139417 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert podName:c634491f-6069-472d-bff7-d8903d0afa1d nodeName:}" failed. No retries permitted until 2026-01-27 08:08:00.139386682 +0000 UTC m=+985.791742364 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert") pod "infra-operator-controller-manager-7d75bc88d5-drtqw" (UID: "c634491f-6069-472d-bff7-d8903d0afa1d") : secret "infra-operator-webhook-server-cert" not found Jan 27 08:07:56 crc kubenswrapper[4787]: I0127 08:07:56.444314 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq\" (UID: \"116b0532-a8d5-47ec-8e12-e7ef482094d6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:07:56 crc kubenswrapper[4787]: E0127 08:07:56.444601 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:07:56 crc kubenswrapper[4787]: E0127 08:07:56.446481 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert podName:116b0532-a8d5-47ec-8e12-e7ef482094d6 nodeName:}" failed. No retries permitted until 2026-01-27 08:08:00.445858413 +0000 UTC m=+986.098213905 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" (UID: "116b0532-a8d5-47ec-8e12-e7ef482094d6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:07:56 crc kubenswrapper[4787]: E0127 08:07:56.647230 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" podUID="7cae616e-6aa8-405f-b10a-2a5346fae5b4" Jan 27 08:07:56 crc kubenswrapper[4787]: E0127 08:07:56.647661 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" podUID="601fe40b-5553-4225-b5bd-214428d6fa68" Jan 27 08:07:56 crc kubenswrapper[4787]: E0127 08:07:56.647717 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" podUID="a21cc0b9-75d2-4ddb-925e-23eeaac2dd35" Jan 27 08:07:56 crc kubenswrapper[4787]: E0127 08:07:56.647983 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:01f06b67539933628ebeb3c8fe813467eb6b478ba9fd4c6ff9892b8306e04f7a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" podUID="292c617a-b64f-4b53-b05e-360769308e43" Jan 27 08:07:57 crc kubenswrapper[4787]: I0127 08:07:57.054874 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:57 crc kubenswrapper[4787]: I0127 08:07:57.054946 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:07:57 crc kubenswrapper[4787]: E0127 08:07:57.055115 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 08:07:57 crc kubenswrapper[4787]: E0127 08:07:57.055174 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs podName:7a0ecc5c-eebe-457b-9a72-804226878c7f nodeName:}" failed. 
No retries permitted until 2026-01-27 08:08:01.055155914 +0000 UTC m=+986.707511406 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs") pod "openstack-operator-controller-manager-c595dbb59-xc2xn" (UID: "7a0ecc5c-eebe-457b-9a72-804226878c7f") : secret "metrics-server-cert" not found Jan 27 08:07:57 crc kubenswrapper[4787]: E0127 08:07:57.055220 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 08:07:57 crc kubenswrapper[4787]: E0127 08:07:57.055241 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs podName:7a0ecc5c-eebe-457b-9a72-804226878c7f nodeName:}" failed. No retries permitted until 2026-01-27 08:08:01.055235846 +0000 UTC m=+986.707591338 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs") pod "openstack-operator-controller-manager-c595dbb59-xc2xn" (UID: "7a0ecc5c-eebe-457b-9a72-804226878c7f") : secret "webhook-server-cert" not found Jan 27 08:08:00 crc kubenswrapper[4787]: I0127 08:08:00.224567 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-drtqw\" (UID: \"c634491f-6069-472d-bff7-d8903d0afa1d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:08:00 crc kubenswrapper[4787]: E0127 08:08:00.224908 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 08:08:00 crc kubenswrapper[4787]: E0127 08:08:00.225034 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert podName:c634491f-6069-472d-bff7-d8903d0afa1d nodeName:}" failed. No retries permitted until 2026-01-27 08:08:08.225003821 +0000 UTC m=+993.877359483 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert") pod "infra-operator-controller-manager-7d75bc88d5-drtqw" (UID: "c634491f-6069-472d-bff7-d8903d0afa1d") : secret "infra-operator-webhook-server-cert" not found Jan 27 08:08:00 crc kubenswrapper[4787]: I0127 08:08:00.530159 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq\" (UID: \"116b0532-a8d5-47ec-8e12-e7ef482094d6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:08:00 crc kubenswrapper[4787]: E0127 08:08:00.530497 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:08:00 crc kubenswrapper[4787]: E0127 08:08:00.530656 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert podName:116b0532-a8d5-47ec-8e12-e7ef482094d6 nodeName:}" failed. 
No retries permitted until 2026-01-27 08:08:08.530620883 +0000 UTC m=+994.182976405 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" (UID: "116b0532-a8d5-47ec-8e12-e7ef482094d6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:08:01 crc kubenswrapper[4787]: I0127 08:08:01.141132 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:08:01 crc kubenswrapper[4787]: I0127 08:08:01.141340 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:08:01 crc kubenswrapper[4787]: E0127 08:08:01.141433 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 08:08:01 crc kubenswrapper[4787]: E0127 08:08:01.141522 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 08:08:01 crc kubenswrapper[4787]: E0127 08:08:01.141618 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs podName:7a0ecc5c-eebe-457b-9a72-804226878c7f nodeName:}" failed. No retries permitted until 2026-01-27 08:08:09.141595491 +0000 UTC m=+994.793950983 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs") pod "openstack-operator-controller-manager-c595dbb59-xc2xn" (UID: "7a0ecc5c-eebe-457b-9a72-804226878c7f") : secret "webhook-server-cert" not found Jan 27 08:08:01 crc kubenswrapper[4787]: E0127 08:08:01.141643 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs podName:7a0ecc5c-eebe-457b-9a72-804226878c7f nodeName:}" failed. No retries permitted until 2026-01-27 08:08:09.141632962 +0000 UTC m=+994.793988454 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs") pod "openstack-operator-controller-manager-c595dbb59-xc2xn" (UID: "7a0ecc5c-eebe-457b-9a72-804226878c7f") : secret "metrics-server-cert" not found Jan 27 08:08:07 crc kubenswrapper[4787]: E0127 08:08:07.134403 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/heat-operator@sha256:027f3118543388d561b452a9777783b1f866ffaf59d9a1b16a225b1c5636111f" Jan 27 08:08:07 crc kubenswrapper[4787]: E0127 08:08:07.135148 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:027f3118543388d561b452a9777783b1f866ffaf59d9a1b16a225b1c5636111f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qv4mh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-575ffb885b-57zhq_openstack-operators(f621b9f3-ead9-4fab-b33f-f9d6179e8f3f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:08:07 crc kubenswrapper[4787]: E0127 08:08:07.136351 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq" podUID="f621b9f3-ead9-4fab-b33f-f9d6179e8f3f" 
Jan 27 08:08:07 crc kubenswrapper[4787]: E0127 08:08:07.613905 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569" Jan 27 08:08:07 crc kubenswrapper[4787]: E0127 08:08:07.614134 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8r9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7ffd8d76d4-vk5vt_openstack-operators(be114c8e-3aa0-41b4-9954-d07c800d3cfc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:08:07 crc kubenswrapper[4787]: E0127 08:08:07.615758 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt" podUID="be114c8e-3aa0-41b4-9954-d07c800d3cfc" Jan 27 08:08:07 crc kubenswrapper[4787]: E0127 08:08:07.740590 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt" podUID="be114c8e-3aa0-41b4-9954-d07c800d3cfc" Jan 27 08:08:07 crc kubenswrapper[4787]: E0127 08:08:07.741200 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:027f3118543388d561b452a9777783b1f866ffaf59d9a1b16a225b1c5636111f\\\"\"" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq" podUID="f621b9f3-ead9-4fab-b33f-f9d6179e8f3f" Jan 27 08:08:08 crc kubenswrapper[4787]: I0127 08:08:08.290863 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-drtqw\" (UID: \"c634491f-6069-472d-bff7-d8903d0afa1d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:08:08 crc kubenswrapper[4787]: I0127 08:08:08.299209 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c634491f-6069-472d-bff7-d8903d0afa1d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-drtqw\" (UID: \"c634491f-6069-472d-bff7-d8903d0afa1d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:08:08 crc kubenswrapper[4787]: I0127 08:08:08.509653 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:08:08 crc kubenswrapper[4787]: I0127 08:08:08.597155 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq\" (UID: \"116b0532-a8d5-47ec-8e12-e7ef482094d6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:08:08 crc kubenswrapper[4787]: I0127 08:08:08.602847 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/116b0532-a8d5-47ec-8e12-e7ef482094d6-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq\" (UID: \"116b0532-a8d5-47ec-8e12-e7ef482094d6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:08:08 crc kubenswrapper[4787]: I0127 08:08:08.782458 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:08:09 crc kubenswrapper[4787]: I0127 08:08:09.207804 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:08:09 crc kubenswrapper[4787]: I0127 08:08:09.208020 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:08:09 crc kubenswrapper[4787]: I0127 08:08:09.212465 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-webhook-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:08:09 crc kubenswrapper[4787]: I0127 08:08:09.213307 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a0ecc5c-eebe-457b-9a72-804226878c7f-metrics-certs\") pod \"openstack-operator-controller-manager-c595dbb59-xc2xn\" (UID: \"7a0ecc5c-eebe-457b-9a72-804226878c7f\") " pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:08:09 crc kubenswrapper[4787]: I0127 08:08:09.483450 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:08:14 crc kubenswrapper[4787]: E0127 08:08:14.785927 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 27 08:08:14 crc kubenswrapper[4787]: E0127 08:08:14.786645 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jzws7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-2k8pd_openstack-operators(32019c4d-61e3-4c27-86d4-3a79bb40ce70): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:08:14 crc kubenswrapper[4787]: E0127 08:08:14.787943 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd" podUID="32019c4d-61e3-4c27-86d4-3a79bb40ce70" Jan 27 08:08:14 crc kubenswrapper[4787]: E0127 08:08:14.800276 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd" podUID="32019c4d-61e3-4c27-86d4-3a79bb40ce70" Jan 27 08:08:15 crc kubenswrapper[4787]: E0127 08:08:15.266420 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 27 08:08:15 crc kubenswrapper[4787]: E0127 08:08:15.266987 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g7r9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5lxsh_openstack-operators(2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:08:15 crc kubenswrapper[4787]: E0127 08:08:15.268241 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh" podUID="2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7" Jan 27 08:08:15 crc kubenswrapper[4787]: E0127 08:08:15.806526 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh" podUID="2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7" Jan 27 08:08:16 crc kubenswrapper[4787]: E0127 08:08:16.701322 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.47:5001/openstack-k8s-operators/nova-operator:1804867abea6e84ca4655efc2800c4a3a97f8b26" Jan 27 08:08:16 crc kubenswrapper[4787]: E0127 08:08:16.701449 4787 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.47:5001/openstack-k8s-operators/nova-operator:1804867abea6e84ca4655efc2800c4a3a97f8b26" Jan 27 08:08:16 crc kubenswrapper[4787]: E0127 08:08:16.701690 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.47:5001/openstack-k8s-operators/nova-operator:1804867abea6e84ca4655efc2800c4a3a97f8b26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2vdjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-57d6b69d8b-s8kvs_openstack-operators(3ad17a28-2cb1-4455-8d38-58919a287ae6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:08:16 crc kubenswrapper[4787]: E0127 08:08:16.702814 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" podUID="3ad17a28-2cb1-4455-8d38-58919a287ae6" Jan 27 08:08:16 crc kubenswrapper[4787]: E0127 08:08:16.819277 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.47:5001/openstack-k8s-operators/nova-operator:1804867abea6e84ca4655efc2800c4a3a97f8b26\\\"\"" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" podUID="3ad17a28-2cb1-4455-8d38-58919a287ae6" Jan 27 08:08:18 crc kubenswrapper[4787]: I0127 08:08:18.833832 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj" event={"ID":"be4be3e5-1589-4e84-9d35-f104bc3d5ad4","Type":"ContainerStarted","Data":"0f31d458943d75c1d98606b52d1813ebba2a16540349e53bf19b6976e43ffa35"} Jan 27 08:08:18 crc kubenswrapper[4787]: I0127 08:08:18.835479 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj" Jan 27 08:08:18 crc kubenswrapper[4787]: I0127 08:08:18.880026 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj" podStartSLOduration=5.005196086 podStartE2EDuration="26.87999664s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.202606085 +0000 UTC m=+979.854961577" lastFinishedPulling="2026-01-27 08:08:16.077406639 +0000 UTC m=+1001.729762131" observedRunningTime="2026-01-27 08:08:18.879160132 +0000 UTC m=+1004.531515624" watchObservedRunningTime="2026-01-27 08:08:18.87999664 +0000 UTC m=+1004.532352152" Jan 27 08:08:18 crc kubenswrapper[4787]: I0127 08:08:18.931418 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn"] Jan 27 08:08:18 crc kubenswrapper[4787]: I0127 08:08:18.945130 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw"] Jan 27 08:08:18 crc kubenswrapper[4787]: W0127 08:08:18.955325 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a0ecc5c_eebe_457b_9a72_804226878c7f.slice/crio-ddb2d1a4499089659acfaa98698f1932dea208cdf10dd56354008bbc81026c46 WatchSource:0}: Error finding container ddb2d1a4499089659acfaa98698f1932dea208cdf10dd56354008bbc81026c46: Status 404 returned error can't find the container with id ddb2d1a4499089659acfaa98698f1932dea208cdf10dd56354008bbc81026c46 Jan 27 08:08:18 crc kubenswrapper[4787]: W0127 08:08:18.979215 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc634491f_6069_472d_bff7_d8903d0afa1d.slice/crio-de9ec2eae17d265ff4e0b874fbf2c37a473b01453b98faa77c82ada974c18bc4 WatchSource:0}: Error finding container de9ec2eae17d265ff4e0b874fbf2c37a473b01453b98faa77c82ada974c18bc4: Status 404 returned error can't find the container with id de9ec2eae17d265ff4e0b874fbf2c37a473b01453b98faa77c82ada974c18bc4 Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.031385 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq"] Jan 27 08:08:19 crc kubenswrapper[4787]: W0127 08:08:19.057608 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod116b0532_a8d5_47ec_8e12_e7ef482094d6.slice/crio-973e696b1f986786fc50d2c8193625a258b0f496dc0d40214f3f6c7debf4fd6e WatchSource:0}: Error finding container 973e696b1f986786fc50d2c8193625a258b0f496dc0d40214f3f6c7debf4fd6e: Status 404 returned error can't find the container with id 973e696b1f986786fc50d2c8193625a258b0f496dc0d40214f3f6c7debf4fd6e Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.849414 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" event={"ID":"c634491f-6069-472d-bff7-d8903d0afa1d","Type":"ContainerStarted","Data":"de9ec2eae17d265ff4e0b874fbf2c37a473b01453b98faa77c82ada974c18bc4"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.856150 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6" event={"ID":"dd8dbc24-8fb8-4ae2-96c3-b7112c7d7c65","Type":"ContainerStarted","Data":"b5bd5e6d329752bfebf8b5ba174c28170d8e3445e248f0671a1f6513805d45e3"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.856345 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6" Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.866302 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" event={"ID":"7a0ecc5c-eebe-457b-9a72-804226878c7f","Type":"ContainerStarted","Data":"2de6f1e5f20624495fa30047f790219dd72f0825754dadf23e6dcd3f0ddbe5ad"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.866358 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" event={"ID":"7a0ecc5c-eebe-457b-9a72-804226878c7f","Type":"ContainerStarted","Data":"ddb2d1a4499089659acfaa98698f1932dea208cdf10dd56354008bbc81026c46"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.866515 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.872299 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk" event={"ID":"7986c3dd-6d06-4892-9fed-1b396669685b","Type":"ContainerStarted","Data":"1be013423657fa04611ffefe50af5dcd6810d660d35d2528fd2f945d7dcba752"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.872431 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk" Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.884823 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6" podStartSLOduration=5.287959079 podStartE2EDuration="27.884775852s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:53.9831216 +0000 UTC m=+979.635477092" lastFinishedPulling="2026-01-27 08:08:16.579938373 +0000 UTC m=+1002.232293865" observedRunningTime="2026-01-27 
08:08:19.880857797 +0000 UTC m=+1005.533213289" watchObservedRunningTime="2026-01-27 08:08:19.884775852 +0000 UTC m=+1005.537131364" Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.887236 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq" event={"ID":"f621b9f3-ead9-4fab-b33f-f9d6179e8f3f","Type":"ContainerStarted","Data":"a9287b80626111fc77fa40695afbcb8ab4d027be7418ae412ab1891f08b7b524"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.887513 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq" Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.901826 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4" event={"ID":"e59e94f5-9033-488d-a186-476cb6cbb3f0","Type":"ContainerStarted","Data":"8ff0a77e5cc8c91e8c6f022dbfd2c9c2acc980b0caa5690479cfe7c634b5f1c0"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.902179 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4" Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.907792 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk" event={"ID":"ff52e5a2-2aab-451a-8869-72c1a506940a","Type":"ContainerStarted","Data":"38a2e0de94e8f3188ee7fde0ada4d229450b767f6b947b8aa35559222b7d155d"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.907971 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk" Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.914696 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" event={"ID":"116b0532-a8d5-47ec-8e12-e7ef482094d6","Type":"ContainerStarted","Data":"973e696b1f986786fc50d2c8193625a258b0f496dc0d40214f3f6c7debf4fd6e"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.929332 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" event={"ID":"292c617a-b64f-4b53-b05e-360769308e43","Type":"ContainerStarted","Data":"fba5e0adbd3c18024e0639401644f737fc506527b0fc0c84900001b8fd783e12"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.929994 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.936399 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk" podStartSLOduration=6.296015523 podStartE2EDuration="27.936381387s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.437057375 +0000 UTC m=+980.089412867" lastFinishedPulling="2026-01-27 08:08:16.077423239 +0000 UTC m=+1001.729778731" observedRunningTime="2026-01-27 08:08:19.93422915 +0000 UTC m=+1005.586584642" watchObservedRunningTime="2026-01-27 08:08:19.936381387 +0000 UTC m=+1005.588736879" Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.942464 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" event={"ID":"7cae616e-6aa8-405f-b10a-2a5346fae5b4","Type":"ContainerStarted","Data":"3196d989fff66b6ff15b1e99011b6e0718e2a77094020f3c80a47155709a22be"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.942852 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.954897 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z" event={"ID":"4b5515cb-f539-4a9b-8c46-36c5c62c5c93","Type":"ContainerStarted","Data":"844228fe2286c5cdba986b68dd287d59189c52a7cab3383d1d52734682aab299"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.955798 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z" Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.964055 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg" event={"ID":"85bafad5-30f3-4931-bd2a-0d45e2b0f844","Type":"ContainerStarted","Data":"76cbf57d6c49b05bdfd746f2e6cc6fafee24a6ad6f3c688eecc680826f7327e9"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.964217 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg" Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.969314 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" event={"ID":"601fe40b-5553-4225-b5bd-214428d6fa68","Type":"ContainerStarted","Data":"03aa5290b47cccc13e1f81c46272c88171072ef4530aaa1d0848a606eb021f35"} Jan 27 08:08:19 crc kubenswrapper[4787]: I0127 08:08:19.970185 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.007410 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" podStartSLOduration=28.007379524 podStartE2EDuration="28.007379524s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:19.971867991 +0000 UTC m=+1005.624223493" watchObservedRunningTime="2026-01-27 08:08:20.007379524 +0000 UTC m=+1005.659735026" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.018463 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj" event={"ID":"7ec92388-f799-43cc-9235-6d4b717bc98e","Type":"ContainerStarted","Data":"ad7673d247dc1b77c422c17804d53946b34ef7be2378e4c17588feee5e3862b1"} Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.019232 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.039388 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z" podStartSLOduration=3.306643052 
podStartE2EDuration="28.039359202s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:53.645333788 +0000 UTC m=+979.297689280" lastFinishedPulling="2026-01-27 08:08:18.378049938 +0000 UTC m=+1004.030405430" observedRunningTime="2026-01-27 08:08:20.002934198 +0000 UTC m=+1005.655289690" watchObservedRunningTime="2026-01-27 08:08:20.039359202 +0000 UTC m=+1005.691714714" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.046264 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m" event={"ID":"59efc0ff-5727-48f9-91b7-36533ba5f94a","Type":"ContainerStarted","Data":"3ee22adc2b860fbef8f8b83dca320796dd722076ae8abdfae507dc0c1e75861e"} Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.047032 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.061469 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" podStartSLOduration=4.371686856 podStartE2EDuration="28.061441733s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.76803674 +0000 UTC m=+980.420392232" lastFinishedPulling="2026-01-27 08:08:18.457791607 +0000 UTC m=+1004.110147109" observedRunningTime="2026-01-27 08:08:20.035448276 +0000 UTC m=+1005.687803768" watchObservedRunningTime="2026-01-27 08:08:20.061441733 +0000 UTC m=+1005.713797235" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.085499 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw" event={"ID":"89d7a7e7-443a-486f-b8b7-a024082b9fba","Type":"ContainerStarted","Data":"15782e1eb7eb94093c1ae1c96c50e2d3bcdddb22c7c9008b13406a1ba759f072"} Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.086039 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.092270 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" podStartSLOduration=4.3332949 podStartE2EDuration="28.092251415s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.690766236 +0000 UTC m=+980.343121728" lastFinishedPulling="2026-01-27 08:08:18.449722751 +0000 UTC m=+1004.102078243" observedRunningTime="2026-01-27 08:08:20.073509566 +0000 UTC m=+1005.725865058" watchObservedRunningTime="2026-01-27 08:08:20.092251415 +0000 UTC m=+1005.744606907" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.105276 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4" event={"ID":"a552e9e7-f9fa-4b71-9ec6-848a2230279f","Type":"ContainerStarted","Data":"3a0b5d26ab0e99e59e1f05644c2f72033c6151ac3781abe42dfb805a24b13f9b"} Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.106450 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.119658 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4" podStartSLOduration=6.280199037 podStartE2EDuration="28.119635051s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.237992136 +0000 UTC m=+979.890347628" lastFinishedPulling="2026-01-27 08:08:16.07742815 +0000 UTC m=+1001.729783642" observedRunningTime="2026-01-27 08:08:20.106949645 +0000 UTC m=+1005.759305157" watchObservedRunningTime="2026-01-27 08:08:20.119635051 +0000 UTC m=+1005.771990543" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.133662 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" event={"ID":"a21cc0b9-75d2-4ddb-925e-23eeaac2dd35","Type":"ContainerStarted","Data":"a37adf6b59116969238245b575db674fd06aeada5b8d8f23014c3dc389766980"} Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.134203 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.143009 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg" podStartSLOduration=5.656924892 podStartE2EDuration="28.142988731s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:53.593377325 +0000 UTC m=+979.245732817" lastFinishedPulling="2026-01-27 08:08:16.079441144 +0000 UTC m=+1001.731796656" observedRunningTime="2026-01-27 08:08:20.142198873 +0000 UTC m=+1005.794554375" watchObservedRunningTime="2026-01-27 08:08:20.142988731 +0000 UTC m=+1005.795344233" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.183829 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq" podStartSLOduration=3.060496034 podStartE2EDuration="28.18379951s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.426065775 +0000 UTC m=+980.078421267" lastFinishedPulling="2026-01-27 08:08:19.549369251 +0000 UTC m=+1005.201724743" observedRunningTime="2026-01-27 08:08:20.176670335 +0000 UTC m=+1005.829025837" watchObservedRunningTime="2026-01-27 08:08:20.18379951 +0000 UTC m=+1005.836155002" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.244320 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" podStartSLOduration=4.483042234 podStartE2EDuration="28.244293549s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.722713412 +0000 UTC m=+980.375068904" lastFinishedPulling="2026-01-27 08:08:18.483964737 +0000 UTC m=+1004.136320219" observedRunningTime="2026-01-27 08:08:20.243981482 +0000 UTC m=+1005.896336974" watchObservedRunningTime="2026-01-27 08:08:20.244293549 +0000 UTC m=+1005.896649041" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.279846 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk" podStartSLOduration=5.380021247 podStartE2EDuration="28.279823474s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:53.677857106 +0000 UTC m=+979.330212598" lastFinishedPulling="2026-01-27 08:08:16.577659333 +0000 UTC m=+1002.230014825" 
observedRunningTime="2026-01-27 08:08:20.278503805 +0000 UTC m=+1005.930859297" watchObservedRunningTime="2026-01-27 08:08:20.279823474 +0000 UTC m=+1005.932178966" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.332342 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj" podStartSLOduration=6.4063432670000005 podStartE2EDuration="28.332314707s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.655004216 +0000 UTC m=+980.307359708" lastFinishedPulling="2026-01-27 08:08:16.580975656 +0000 UTC m=+1002.233331148" observedRunningTime="2026-01-27 08:08:20.316007821 +0000 UTC m=+1005.968363313" watchObservedRunningTime="2026-01-27 08:08:20.332314707 +0000 UTC m=+1005.984670199" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.359922 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4" podStartSLOduration=6.117288037 podStartE2EDuration="28.359902229s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.437418544 +0000 UTC m=+980.089774036" lastFinishedPulling="2026-01-27 08:08:16.680032746 +0000 UTC m=+1002.332388228" observedRunningTime="2026-01-27 08:08:20.35630919 +0000 UTC m=+1006.008664682" watchObservedRunningTime="2026-01-27 08:08:20.359902229 +0000 UTC m=+1006.012257721" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.406150 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m" podStartSLOduration=6.521752923 podStartE2EDuration="28.406119036s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.193022676 +0000 UTC m=+979.845378168" lastFinishedPulling="2026-01-27 08:08:16.077388789 +0000 UTC m=+1001.729744281" observedRunningTime="2026-01-27 08:08:20.397181691 +0000 UTC m=+1006.049537183" watchObservedRunningTime="2026-01-27 08:08:20.406119036 +0000 UTC m=+1006.058474528" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.423405 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" podStartSLOduration=4.748493359 podStartE2EDuration="28.423388052s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.76161527 +0000 UTC m=+980.413970762" lastFinishedPulling="2026-01-27 08:08:18.436509963 +0000 UTC m=+1004.088865455" observedRunningTime="2026-01-27 08:08:20.422410211 +0000 UTC m=+1006.074765703" watchObservedRunningTime="2026-01-27 08:08:20.423388052 +0000 UTC m=+1006.075743544" Jan 27 08:08:20 crc kubenswrapper[4787]: I0127 08:08:20.455040 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw" podStartSLOduration=6.195941301 podStartE2EDuration="28.455008582s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.442257129 +0000 UTC m=+980.094612621" lastFinishedPulling="2026-01-27 08:08:16.7013244 +0000 UTC m=+1002.353679902" observedRunningTime="2026-01-27 08:08:20.443607934 +0000 UTC m=+1006.095963426" watchObservedRunningTime="2026-01-27 08:08:20.455008582 +0000 UTC m=+1006.107364064" Jan 27 08:08:22 crc kubenswrapper[4787]: I0127 08:08:22.823076 4787 patch_prober.go:28] 
interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:08:22 crc kubenswrapper[4787]: I0127 08:08:22.823527 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:08:23 crc kubenswrapper[4787]: I0127 08:08:23.078679 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:08:23 crc kubenswrapper[4787]: I0127 08:08:23.158263 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" event={"ID":"c634491f-6069-472d-bff7-d8903d0afa1d","Type":"ContainerStarted","Data":"9a5f9af42577fe0535c92baf06b60bc3890e044dd01088d7c9e763662aae3dc0"} Jan 27 08:08:23 crc kubenswrapper[4787]: I0127 08:08:23.158387 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:08:23 crc kubenswrapper[4787]: I0127 08:08:23.161625 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" event={"ID":"116b0532-a8d5-47ec-8e12-e7ef482094d6","Type":"ContainerStarted","Data":"98f7a5376e8970cb01e8781bb3f4d9c7e3e9eed78127e6726f1a694774c92eda"} Jan 27 08:08:23 crc kubenswrapper[4787]: I0127 08:08:23.161925 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:08:23 crc kubenswrapper[4787]: I0127 08:08:23.191136 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" podStartSLOduration=27.247732896 podStartE2EDuration="31.191107743s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:08:18.983579928 +0000 UTC m=+1004.635935420" lastFinishedPulling="2026-01-27 08:08:22.926954775 +0000 UTC m=+1008.579310267" observedRunningTime="2026-01-27 08:08:23.180140864 +0000 UTC m=+1008.832496346" watchObservedRunningTime="2026-01-27 08:08:23.191107743 +0000 UTC m=+1008.843463235" Jan 27 08:08:23 crc kubenswrapper[4787]: I0127 08:08:23.219230 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" podStartSLOduration=27.344525527 podStartE2EDuration="31.219199346s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:08:19.061924876 +0000 UTC m=+1004.714280368" lastFinishedPulling="2026-01-27 08:08:22.936598705 +0000 UTC m=+1008.588954187" observedRunningTime="2026-01-27 08:08:23.213270857 +0000 UTC m=+1008.865626339" watchObservedRunningTime="2026-01-27 08:08:23.219199346 +0000 UTC m=+1008.871554838" Jan 27 08:08:24 crc kubenswrapper[4787]: I0127 08:08:24.170600 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt" 
event={"ID":"be114c8e-3aa0-41b4-9954-d07c800d3cfc","Type":"ContainerStarted","Data":"2257e99f9ac6108ca9b22a6fd685af6bf45dd232de7c606a67622748d69995ef"} Jan 27 08:08:24 crc kubenswrapper[4787]: I0127 08:08:24.171821 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt" Jan 27 08:08:28 crc kubenswrapper[4787]: I0127 08:08:28.518017 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-drtqw" Jan 27 08:08:28 crc kubenswrapper[4787]: I0127 08:08:28.542722 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt" podStartSLOduration=7.576213869 podStartE2EDuration="36.542698717s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.685315287 +0000 UTC m=+980.337670779" lastFinishedPulling="2026-01-27 08:08:23.651800135 +0000 UTC m=+1009.304155627" observedRunningTime="2026-01-27 08:08:24.212310033 +0000 UTC m=+1009.864665525" watchObservedRunningTime="2026-01-27 08:08:28.542698717 +0000 UTC m=+1014.195054209" Jan 27 08:08:28 crc kubenswrapper[4787]: I0127 08:08:28.793518 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq" Jan 27 08:08:29 crc kubenswrapper[4787]: I0127 08:08:29.489993 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-c595dbb59-xc2xn" Jan 27 08:08:30 crc kubenswrapper[4787]: I0127 08:08:30.228407 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh" event={"ID":"2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7","Type":"ContainerStarted","Data":"2e67358ef5452f8e65588979d6ba23bd18fabf7c82969f0f92f0255c75a39f77"} Jan 27 08:08:30 crc kubenswrapper[4787]: I0127 08:08:30.248764 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lxsh" podStartSLOduration=2.447435819 podStartE2EDuration="37.248745205s" podCreationTimestamp="2026-01-27 08:07:53 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.684667132 +0000 UTC m=+980.337022624" lastFinishedPulling="2026-01-27 08:08:29.485976528 +0000 UTC m=+1015.138332010" observedRunningTime="2026-01-27 08:08:30.246572138 +0000 UTC m=+1015.898927620" watchObservedRunningTime="2026-01-27 08:08:30.248745205 +0000 UTC m=+1015.901100697" Jan 27 08:08:31 crc kubenswrapper[4787]: I0127 08:08:31.237771 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" event={"ID":"3ad17a28-2cb1-4455-8d38-58919a287ae6","Type":"ContainerStarted","Data":"e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488"} Jan 27 08:08:31 crc kubenswrapper[4787]: I0127 08:08:31.238342 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" Jan 27 08:08:31 crc kubenswrapper[4787]: I0127 08:08:31.239735 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd" 
event={"ID":"32019c4d-61e3-4c27-86d4-3a79bb40ce70","Type":"ContainerStarted","Data":"ea44ab5dd60dbac9f99e762e21c703c8457dfab59c68b898d6406903dc281b4a"} Jan 27 08:08:31 crc kubenswrapper[4787]: I0127 08:08:31.239894 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd" Jan 27 08:08:31 crc kubenswrapper[4787]: I0127 08:08:31.258369 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" podStartSLOduration=3.2881490270000002 podStartE2EDuration="39.258353473s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:54.187459414 +0000 UTC m=+979.839814896" lastFinishedPulling="2026-01-27 08:08:30.15766385 +0000 UTC m=+1015.810019342" observedRunningTime="2026-01-27 08:08:31.25732867 +0000 UTC m=+1016.909684152" watchObservedRunningTime="2026-01-27 08:08:31.258353473 +0000 UTC m=+1016.910708965" Jan 27 08:08:31 crc kubenswrapper[4787]: I0127 08:08:31.277585 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd" podStartSLOduration=2.229673545 podStartE2EDuration="39.277550591s" podCreationTimestamp="2026-01-27 08:07:52 +0000 UTC" firstStartedPulling="2026-01-27 08:07:53.447004184 +0000 UTC m=+979.099359676" lastFinishedPulling="2026-01-27 08:08:30.49488123 +0000 UTC m=+1016.147236722" observedRunningTime="2026-01-27 08:08:31.274384482 +0000 UTC m=+1016.926739974" watchObservedRunningTime="2026-01-27 08:08:31.277550591 +0000 UTC m=+1016.929906083" Jan 27 08:08:32 crc kubenswrapper[4787]: I0127 08:08:32.452944 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6tnpg" Jan 27 08:08:32 crc kubenswrapper[4787]: I0127 08:08:32.684843 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-j7klk" Jan 27 08:08:32 crc kubenswrapper[4787]: I0127 08:08:32.731256 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-d5r5z" Jan 27 08:08:32 crc kubenswrapper[4787]: I0127 08:08:32.807381 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-627g6" Jan 27 08:08:32 crc kubenswrapper[4787]: I0127 08:08:32.846686 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-kk6mj" Jan 27 08:08:32 crc kubenswrapper[4787]: I0127 08:08:32.864187 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-57zhq" Jan 27 08:08:32 crc kubenswrapper[4787]: I0127 08:08:32.891035 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vkmnm" Jan 27 08:08:32 crc kubenswrapper[4787]: I0127 08:08:32.891135 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tbr5m" Jan 27 08:08:32 crc kubenswrapper[4787]: I0127 08:08:32.956591 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-88gn4" Jan 27 08:08:33 crc kubenswrapper[4787]: I0127 08:08:33.021419 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-vk5vt" Jan 27 08:08:33 crc kubenswrapper[4787]: I0127 08:08:33.025257 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qhnz4" Jan 27 08:08:33 crc kubenswrapper[4787]: I0127 08:08:33.113759 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-x4nz5" Jan 27 08:08:33 crc kubenswrapper[4787]: I0127 08:08:33.145521 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-cqpjk" Jan 27 08:08:33 crc kubenswrapper[4787]: I0127 08:08:33.245266 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-l9mc7" Jan 27 08:08:33 crc kubenswrapper[4787]: I0127 08:08:33.360837 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-pxbjw" Jan 27 08:08:33 crc kubenswrapper[4787]: I0127 08:08:33.466513 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-dnxpj" Jan 27 08:08:33 crc kubenswrapper[4787]: I0127 08:08:33.500714 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-8cmb9" Jan 27 08:08:42 crc kubenswrapper[4787]: I0127 08:08:42.715595 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-2k8pd" Jan 27 08:08:42 crc kubenswrapper[4787]: I0127 08:08:42.778184 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.767428 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.769483 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.773541 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-erlang-cookie" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.774480 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-default-user" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.776340 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-server-dockercfg-f62tp" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.777448 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"kube-root-ca.crt" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.777678 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-plugins-conf" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.777868 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-server-conf" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.778013 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openshift-service-ca.crt" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.790888 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.857465 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.857616 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.878681 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.878765 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.878791 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.878821 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.878835 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bxjz\" (UniqueName: \"kubernetes.io/projected/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-kube-api-access-6bxjz\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.878861 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a6dbd2e-1018-4ddf-90f4-9f08dd92aff2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a6dbd2e-1018-4ddf-90f4-9f08dd92aff2\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.878892 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.878912 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.878937 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.980049 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.980101 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.980134 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.980149 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bxjz\" (UniqueName: \"kubernetes.io/projected/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-kube-api-access-6bxjz\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.980180 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9a6dbd2e-1018-4ddf-90f4-9f08dd92aff2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a6dbd2e-1018-4ddf-90f4-9f08dd92aff2\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.980207 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.980228 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.980253 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.980295 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.981503 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.981771 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.982030 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.982431 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.987581 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.987685 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.995734 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 08:08:52 crc kubenswrapper[4787]: I0127 08:08:52.995788 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9a6dbd2e-1018-4ddf-90f4-9f08dd92aff2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a6dbd2e-1018-4ddf-90f4-9f08dd92aff2\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/47535e156a7e386aa4a04a5f1bf6b12c634363205396f5d1030723c8fdd236ad/globalmount\"" pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.004478 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bxjz\" (UniqueName: \"kubernetes.io/projected/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-kube-api-access-6bxjz\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.007468 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f84bfff4-b4f2-40d8-8b81-a9e5eb776442-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.024840 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.025839 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9a6dbd2e-1018-4ddf-90f4-9f08dd92aff2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a6dbd2e-1018-4ddf-90f4-9f08dd92aff2\") pod \"rabbitmq-server-0\" (UID: \"f84bfff4-b4f2-40d8-8b81-a9e5eb776442\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.030417 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.036169 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-default-user" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.036377 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-conf" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.036532 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-dockercfg-w4jtp" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.036564 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-plugins-conf" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.041622 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.044088 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-erlang-cookie" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.091870 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.194509 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68be2504-1de8-427c-9937-bd0428f7c5c4-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.195043 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68be2504-1de8-427c-9937-bd0428f7c5c4-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.195069 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68be2504-1de8-427c-9937-bd0428f7c5c4-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.195098 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68be2504-1de8-427c-9937-bd0428f7c5c4-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.195146 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmzmt\" (UniqueName: \"kubernetes.io/projected/68be2504-1de8-427c-9937-bd0428f7c5c4-kube-api-access-fmzmt\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 
08:08:53.195480 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68be2504-1de8-427c-9937-bd0428f7c5c4-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.195870 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6f32bada-121e-4882-a6c8-085f9814aca8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f32bada-121e-4882-a6c8-085f9814aca8\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.195919 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68be2504-1de8-427c-9937-bd0428f7c5c4-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.195959 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68be2504-1de8-427c-9937-bd0428f7c5c4-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.297379 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmzmt\" (UniqueName: \"kubernetes.io/projected/68be2504-1de8-427c-9937-bd0428f7c5c4-kube-api-access-fmzmt\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.297425 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68be2504-1de8-427c-9937-bd0428f7c5c4-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.297497 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6f32bada-121e-4882-a6c8-085f9814aca8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f32bada-121e-4882-a6c8-085f9814aca8\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.297525 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68be2504-1de8-427c-9937-bd0428f7c5c4-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.297567 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/68be2504-1de8-427c-9937-bd0428f7c5c4-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.297590 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68be2504-1de8-427c-9937-bd0428f7c5c4-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.297612 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68be2504-1de8-427c-9937-bd0428f7c5c4-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.297632 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68be2504-1de8-427c-9937-bd0428f7c5c4-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.297653 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68be2504-1de8-427c-9937-bd0428f7c5c4-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.299018 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68be2504-1de8-427c-9937-bd0428f7c5c4-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.299383 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68be2504-1de8-427c-9937-bd0428f7c5c4-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.299967 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68be2504-1de8-427c-9937-bd0428f7c5c4-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.300058 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68be2504-1de8-427c-9937-bd0428f7c5c4-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.305236 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68be2504-1de8-427c-9937-bd0428f7c5c4-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.305308 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.305333 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6f32bada-121e-4882-a6c8-085f9814aca8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f32bada-121e-4882-a6c8-085f9814aca8\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/13f5f52254cfc6354b5820b49c0e7af456791e629d5b614bbb3f5fbac1a0f064/globalmount\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.306138 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68be2504-1de8-427c-9937-bd0428f7c5c4-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.320640 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68be2504-1de8-427c-9937-bd0428f7c5c4-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.328040 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmzmt\" (UniqueName: \"kubernetes.io/projected/68be2504-1de8-427c-9937-bd0428f7c5c4-kube-api-access-fmzmt\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.357092 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.359388 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.365077 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.366380 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-erlang-cookie" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.366601 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-plugins-conf" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.367151 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-server-dockercfg-f2rj8" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.368385 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-server-conf" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.368734 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-default-user" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.371877 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6f32bada-121e-4882-a6c8-085f9814aca8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f32bada-121e-4882-a6c8-085f9814aca8\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"68be2504-1de8-427c-9937-bd0428f7c5c4\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.382622 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.500619 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/faff0a15-880a-4cf7-a0e0-81d573ace274-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.500669 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pl46\" (UniqueName: \"kubernetes.io/projected/faff0a15-880a-4cf7-a0e0-81d573ace274-kube-api-access-4pl46\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.500695 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/faff0a15-880a-4cf7-a0e0-81d573ace274-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.500736 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/faff0a15-880a-4cf7-a0e0-81d573ace274-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.500829 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/faff0a15-880a-4cf7-a0e0-81d573ace274-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.501116 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/faff0a15-880a-4cf7-a0e0-81d573ace274-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.501184 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9be3783e-45f7-4421-9e4f-1bba804afd6b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9be3783e-45f7-4421-9e4f-1bba804afd6b\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.501346 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/faff0a15-880a-4cf7-a0e0-81d573ace274-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.501382 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/faff0a15-880a-4cf7-a0e0-81d573ace274-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.541061 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.602686 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/faff0a15-880a-4cf7-a0e0-81d573ace274-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.603286 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/faff0a15-880a-4cf7-a0e0-81d573ace274-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.603313 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/faff0a15-880a-4cf7-a0e0-81d573ace274-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.603376 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/faff0a15-880a-4cf7-a0e0-81d573ace274-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.603424 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9be3783e-45f7-4421-9e4f-1bba804afd6b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9be3783e-45f7-4421-9e4f-1bba804afd6b\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.603479 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/faff0a15-880a-4cf7-a0e0-81d573ace274-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.603511 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/faff0a15-880a-4cf7-a0e0-81d573ace274-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.603645 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/faff0a15-880a-4cf7-a0e0-81d573ace274-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.603683 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pl46\" (UniqueName: \"kubernetes.io/projected/faff0a15-880a-4cf7-a0e0-81d573ace274-kube-api-access-4pl46\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.604296 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/faff0a15-880a-4cf7-a0e0-81d573ace274-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.604888 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/faff0a15-880a-4cf7-a0e0-81d573ace274-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.605000 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/faff0a15-880a-4cf7-a0e0-81d573ace274-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.605289 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/faff0a15-880a-4cf7-a0e0-81d573ace274-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.607016 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/faff0a15-880a-4cf7-a0e0-81d573ace274-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.608315 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/faff0a15-880a-4cf7-a0e0-81d573ace274-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.615243 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.615287 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9be3783e-45f7-4421-9e4f-1bba804afd6b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9be3783e-45f7-4421-9e4f-1bba804afd6b\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4938bfb1fada77ff175a504d95a3353fd8acb06803a7715257215a5ef30da24a/globalmount\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.616516 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/faff0a15-880a-4cf7-a0e0-81d573ace274-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.631814 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pl46\" (UniqueName: \"kubernetes.io/projected/faff0a15-880a-4cf7-a0e0-81d573ace274-kube-api-access-4pl46\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.646586 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.669087 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9be3783e-45f7-4421-9e4f-1bba804afd6b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9be3783e-45f7-4421-9e4f-1bba804afd6b\") pod \"rabbitmq-cell1-server-0\" (UID: \"faff0a15-880a-4cf7-a0e0-81d573ace274\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.689284 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.701345 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-notifications-server-0"] Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.702577 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.707340 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-notifications-default-user" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.707543 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-notifications-erlang-cookie" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.709249 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-notifications-server-conf" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.709525 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-notifications-server-dockercfg-95rc9" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.712944 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-notifications-plugins-conf" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.727702 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-notifications-server-0"] Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.817749 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d9a6f95-2510-4e74-b4f7-fb592d761c91-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.817866 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d9a6f95-2510-4e74-b4f7-fb592d761c91-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.817983 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fkdp\" (UniqueName: \"kubernetes.io/projected/1d9a6f95-2510-4e74-b4f7-fb592d761c91-kube-api-access-5fkdp\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.818143 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d9a6f95-2510-4e74-b4f7-fb592d761c91-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.818177 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f0f05e97-a20e-4f94-aba0-aaf49d5c6d61\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f0f05e97-a20e-4f94-aba0-aaf49d5c6d61\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.818211 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d9a6f95-2510-4e74-b4f7-fb592d761c91-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.818238 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d9a6f95-2510-4e74-b4f7-fb592d761c91-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.818255 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d9a6f95-2510-4e74-b4f7-fb592d761c91-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.818278 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d9a6f95-2510-4e74-b4f7-fb592d761c91-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.919648 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fkdp\" (UniqueName: \"kubernetes.io/projected/1d9a6f95-2510-4e74-b4f7-fb592d761c91-kube-api-access-5fkdp\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.919701 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d9a6f95-2510-4e74-b4f7-fb592d761c91-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.919732 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f0f05e97-a20e-4f94-aba0-aaf49d5c6d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f0f05e97-a20e-4f94-aba0-aaf49d5c6d61\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.919756 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d9a6f95-2510-4e74-b4f7-fb592d761c91-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " 
pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.919779 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d9a6f95-2510-4e74-b4f7-fb592d761c91-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.919799 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d9a6f95-2510-4e74-b4f7-fb592d761c91-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.919825 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d9a6f95-2510-4e74-b4f7-fb592d761c91-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.919846 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d9a6f95-2510-4e74-b4f7-fb592d761c91-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.919887 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d9a6f95-2510-4e74-b4f7-fb592d761c91-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.921225 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d9a6f95-2510-4e74-b4f7-fb592d761c91-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.921478 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d9a6f95-2510-4e74-b4f7-fb592d761c91-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.922279 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d9a6f95-2510-4e74-b4f7-fb592d761c91-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.922393 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d9a6f95-2510-4e74-b4f7-fb592d761c91-server-conf\") pod 
\"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.924074 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d9a6f95-2510-4e74-b4f7-fb592d761c91-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.924108 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.924136 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f0f05e97-a20e-4f94-aba0-aaf49d5c6d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f0f05e97-a20e-4f94-aba0-aaf49d5c6d61\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5726dae6ba81680bf46e46cc2901243be22c80d297c0bbc37108d63d7f136958/globalmount\"" pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.925747 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d9a6f95-2510-4e74-b4f7-fb592d761c91-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.926837 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d9a6f95-2510-4e74-b4f7-fb592d761c91-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.944438 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fkdp\" (UniqueName: \"kubernetes.io/projected/1d9a6f95-2510-4e74-b4f7-fb592d761c91-kube-api-access-5fkdp\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.953418 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f0f05e97-a20e-4f94-aba0-aaf49d5c6d61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f0f05e97-a20e-4f94-aba0-aaf49d5c6d61\") pod \"rabbitmq-notifications-server-0\" (UID: \"1d9a6f95-2510-4e74-b4f7-fb592d761c91\") " pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.976075 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.977677 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.984457 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config-data" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.984694 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-scripts" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.984873 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-svc" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.984948 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-dockercfg-4zpns" Jan 27 08:08:53 crc kubenswrapper[4787]: I0127 08:08:53.990788 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"combined-ca-bundle" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.004915 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.035214 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.129825 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bsbt\" (UniqueName: \"kubernetes.io/projected/144491fe-49b9-4a76-8da2-db798cf6a1e4-kube-api-access-4bsbt\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.130119 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/144491fe-49b9-4a76-8da2-db798cf6a1e4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.130268 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/144491fe-49b9-4a76-8da2-db798cf6a1e4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.130404 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/144491fe-49b9-4a76-8da2-db798cf6a1e4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.130470 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0c1e6e82-ab87-4992-97e4-a29fbbb75e54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c1e6e82-ab87-4992-97e4-a29fbbb75e54\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.130643 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/144491fe-49b9-4a76-8da2-db798cf6a1e4-kolla-config\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.130701 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/144491fe-49b9-4a76-8da2-db798cf6a1e4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.130748 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/144491fe-49b9-4a76-8da2-db798cf6a1e4-config-data-default\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.146938 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 27 08:08:54 crc kubenswrapper[4787]: W0127 08:08:54.164710 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaff0a15_880a_4cf7_a0e0_81d573ace274.slice/crio-00921be760b5ce1051039b98d15ddc8ffa501aa8a39821ad9e6ae171814f6477 WatchSource:0}: Error finding container 00921be760b5ce1051039b98d15ddc8ffa501aa8a39821ad9e6ae171814f6477: Status 404 returned error can't find the container with id 00921be760b5ce1051039b98d15ddc8ffa501aa8a39821ad9e6ae171814f6477 Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.238907 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/144491fe-49b9-4a76-8da2-db798cf6a1e4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.239003 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/144491fe-49b9-4a76-8da2-db798cf6a1e4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.239073 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/144491fe-49b9-4a76-8da2-db798cf6a1e4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.239103 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0c1e6e82-ab87-4992-97e4-a29fbbb75e54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c1e6e82-ab87-4992-97e4-a29fbbb75e54\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.239146 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/144491fe-49b9-4a76-8da2-db798cf6a1e4-kolla-config\") 
pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.239174 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/144491fe-49b9-4a76-8da2-db798cf6a1e4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.239195 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/144491fe-49b9-4a76-8da2-db798cf6a1e4-config-data-default\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.239223 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bsbt\" (UniqueName: \"kubernetes.io/projected/144491fe-49b9-4a76-8da2-db798cf6a1e4-kube-api-access-4bsbt\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.252402 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/144491fe-49b9-4a76-8da2-db798cf6a1e4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.252420 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/144491fe-49b9-4a76-8da2-db798cf6a1e4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.252978 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/144491fe-49b9-4a76-8da2-db798cf6a1e4-config-data-default\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.256870 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/144491fe-49b9-4a76-8da2-db798cf6a1e4-kolla-config\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.264043 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/144491fe-49b9-4a76-8da2-db798cf6a1e4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.267830 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/144491fe-49b9-4a76-8da2-db798cf6a1e4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 
08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.268480 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.268523 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0c1e6e82-ab87-4992-97e4-a29fbbb75e54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c1e6e82-ab87-4992-97e4-a29fbbb75e54\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/51a2db4ae30aef6d7294853f76f947ddfd9610e76fc9480e73634ab28e724158/globalmount\"" pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.278206 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bsbt\" (UniqueName: \"kubernetes.io/projected/144491fe-49b9-4a76-8da2-db798cf6a1e4-kube-api-access-4bsbt\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.337566 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0c1e6e82-ab87-4992-97e4-a29fbbb75e54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c1e6e82-ab87-4992-97e4-a29fbbb75e54\") pod \"openstack-galera-0\" (UID: \"144491fe-49b9-4a76-8da2-db798cf6a1e4\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.359312 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.395804 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.398792 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.403222 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"memcached-memcached-dockercfg-wdr2q" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.403441 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"memcached-config-data" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.422021 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.468802 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"68be2504-1de8-427c-9937-bd0428f7c5c4","Type":"ContainerStarted","Data":"9ea2cf6619889b10230a361f5a1a8f72e024b5a3185d7c9014ec65a08c5dea27"} Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.476514 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"f84bfff4-b4f2-40d8-8b81-a9e5eb776442","Type":"ContainerStarted","Data":"7bdf39254feca0fe0d095f1c43851a7f4782f3195c8d7fab4715c3e0b9adfb11"} Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.478726 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"faff0a15-880a-4cf7-a0e0-81d573ace274","Type":"ContainerStarted","Data":"00921be760b5ce1051039b98d15ddc8ffa501aa8a39821ad9e6ae171814f6477"} Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.549311 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0fc932b-0d21-426c-9b58-15c14ad9e95b-config-data\") pod \"memcached-0\" (UID: \"b0fc932b-0d21-426c-9b58-15c14ad9e95b\") " pod="nova-kuttl-default/memcached-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.549892 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0fc932b-0d21-426c-9b58-15c14ad9e95b-kolla-config\") pod \"memcached-0\" (UID: \"b0fc932b-0d21-426c-9b58-15c14ad9e95b\") " pod="nova-kuttl-default/memcached-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.549995 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tztlk\" (UniqueName: \"kubernetes.io/projected/b0fc932b-0d21-426c-9b58-15c14ad9e95b-kube-api-access-tztlk\") pod \"memcached-0\" (UID: \"b0fc932b-0d21-426c-9b58-15c14ad9e95b\") " pod="nova-kuttl-default/memcached-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.641938 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-notifications-server-0"] Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.652023 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0fc932b-0d21-426c-9b58-15c14ad9e95b-config-data\") pod \"memcached-0\" (UID: \"b0fc932b-0d21-426c-9b58-15c14ad9e95b\") " pod="nova-kuttl-default/memcached-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.652174 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0fc932b-0d21-426c-9b58-15c14ad9e95b-kolla-config\") pod \"memcached-0\" (UID: \"b0fc932b-0d21-426c-9b58-15c14ad9e95b\") " 
pod="nova-kuttl-default/memcached-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.652203 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tztlk\" (UniqueName: \"kubernetes.io/projected/b0fc932b-0d21-426c-9b58-15c14ad9e95b-kube-api-access-tztlk\") pod \"memcached-0\" (UID: \"b0fc932b-0d21-426c-9b58-15c14ad9e95b\") " pod="nova-kuttl-default/memcached-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.653286 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0fc932b-0d21-426c-9b58-15c14ad9e95b-config-data\") pod \"memcached-0\" (UID: \"b0fc932b-0d21-426c-9b58-15c14ad9e95b\") " pod="nova-kuttl-default/memcached-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.653328 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0fc932b-0d21-426c-9b58-15c14ad9e95b-kolla-config\") pod \"memcached-0\" (UID: \"b0fc932b-0d21-426c-9b58-15c14ad9e95b\") " pod="nova-kuttl-default/memcached-0" Jan 27 08:08:54 crc kubenswrapper[4787]: W0127 08:08:54.663169 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d9a6f95_2510_4e74_b4f7_fb592d761c91.slice/crio-ed6bb6f4d3c10e5f249cf88895cdfa708b5f24c7419f2e18f6691df2ff20145c WatchSource:0}: Error finding container ed6bb6f4d3c10e5f249cf88895cdfa708b5f24c7419f2e18f6691df2ff20145c: Status 404 returned error can't find the container with id ed6bb6f4d3c10e5f249cf88895cdfa708b5f24c7419f2e18f6691df2ff20145c Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.672222 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tztlk\" (UniqueName: \"kubernetes.io/projected/b0fc932b-0d21-426c-9b58-15c14ad9e95b-kube-api-access-tztlk\") pod \"memcached-0\" (UID: \"b0fc932b-0d21-426c-9b58-15c14ad9e95b\") " pod="nova-kuttl-default/memcached-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.749510 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 27 08:08:54 crc kubenswrapper[4787]: I0127 08:08:54.953297 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 27 08:08:54 crc kubenswrapper[4787]: W0127 08:08:54.985683 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod144491fe_49b9_4a76_8da2_db798cf6a1e4.slice/crio-e3226f087d6e245096d06155c2977662dd54d793d0d964e86c97b52b51ef5388 WatchSource:0}: Error finding container e3226f087d6e245096d06155c2977662dd54d793d0d964e86c97b52b51ef5388: Status 404 returned error can't find the container with id e3226f087d6e245096d06155c2977662dd54d793d0d964e86c97b52b51ef5388 Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.075868 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.453224 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.456015 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.460473 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-scripts" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.461800 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-config-data" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.462009 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-cell1-svc" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.462184 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-cell1-dockercfg-dnprn" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.486163 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.493039 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"1d9a6f95-2510-4e74-b4f7-fb592d761c91","Type":"ContainerStarted","Data":"ed6bb6f4d3c10e5f249cf88895cdfa708b5f24c7419f2e18f6691df2ff20145c"} Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.497463 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"b0fc932b-0d21-426c-9b58-15c14ad9e95b","Type":"ContainerStarted","Data":"9d55989abeff56ebd57b0db618082786dc16023f5c5b9ee58731b4952ed38287"} Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.499973 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"144491fe-49b9-4a76-8da2-db798cf6a1e4","Type":"ContainerStarted","Data":"e3226f087d6e245096d06155c2977662dd54d793d0d964e86c97b52b51ef5388"} Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.583759 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lmrn\" (UniqueName: \"kubernetes.io/projected/841b999e-6e42-4f28-8fd8-334f69c3e3e6-kube-api-access-4lmrn\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.583812 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841b999e-6e42-4f28-8fd8-334f69c3e3e6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.583834 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/841b999e-6e42-4f28-8fd8-334f69c3e3e6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.584106 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/841b999e-6e42-4f28-8fd8-334f69c3e3e6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " 
pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.584138 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-14d23fb8-f9f3-4b2c-8c73-3e8cc19dad87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14d23fb8-f9f3-4b2c-8c73-3e8cc19dad87\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.584154 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/841b999e-6e42-4f28-8fd8-334f69c3e3e6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.584177 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/841b999e-6e42-4f28-8fd8-334f69c3e3e6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.584218 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/841b999e-6e42-4f28-8fd8-334f69c3e3e6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.685356 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lmrn\" (UniqueName: \"kubernetes.io/projected/841b999e-6e42-4f28-8fd8-334f69c3e3e6-kube-api-access-4lmrn\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.685809 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841b999e-6e42-4f28-8fd8-334f69c3e3e6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.685844 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/841b999e-6e42-4f28-8fd8-334f69c3e3e6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.685918 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/841b999e-6e42-4f28-8fd8-334f69c3e3e6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.685982 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-14d23fb8-f9f3-4b2c-8c73-3e8cc19dad87\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14d23fb8-f9f3-4b2c-8c73-3e8cc19dad87\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.686013 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/841b999e-6e42-4f28-8fd8-334f69c3e3e6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.686053 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/841b999e-6e42-4f28-8fd8-334f69c3e3e6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.686108 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/841b999e-6e42-4f28-8fd8-334f69c3e3e6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.691465 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/841b999e-6e42-4f28-8fd8-334f69c3e3e6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.692006 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/841b999e-6e42-4f28-8fd8-334f69c3e3e6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.692518 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/841b999e-6e42-4f28-8fd8-334f69c3e3e6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.693805 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/841b999e-6e42-4f28-8fd8-334f69c3e3e6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.694673 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.694702 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-14d23fb8-f9f3-4b2c-8c73-3e8cc19dad87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14d23fb8-f9f3-4b2c-8c73-3e8cc19dad87\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/124149b2461db4c62bd3ac0268dc92a81b0f2a8d060cca5c653c25533aaa0b7c/globalmount\"" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.701601 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841b999e-6e42-4f28-8fd8-334f69c3e3e6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.702420 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/841b999e-6e42-4f28-8fd8-334f69c3e3e6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.706436 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lmrn\" (UniqueName: \"kubernetes.io/projected/841b999e-6e42-4f28-8fd8-334f69c3e3e6-kube-api-access-4lmrn\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.738734 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-14d23fb8-f9f3-4b2c-8c73-3e8cc19dad87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14d23fb8-f9f3-4b2c-8c73-3e8cc19dad87\") pod \"openstack-cell1-galera-0\" (UID: \"841b999e-6e42-4f28-8fd8-334f69c3e3e6\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:55 crc kubenswrapper[4787]: I0127 08:08:55.792873 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:08:56 crc kubenswrapper[4787]: I0127 08:08:56.453878 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 27 08:08:56 crc kubenswrapper[4787]: I0127 08:08:56.526228 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"841b999e-6e42-4f28-8fd8-334f69c3e3e6","Type":"ContainerStarted","Data":"c5a33ff6618a7b8dfde48f26982ff15af5a6d3de16ffa969b25adc99494ed3a8"} Jan 27 08:09:08 crc kubenswrapper[4787]: I0127 08:09:08.843041 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"b0fc932b-0d21-426c-9b58-15c14ad9e95b","Type":"ContainerStarted","Data":"67e3f1370a7cab2f2c0368ba565165396abfb9d407f372525152832c659a03ab"} Jan 27 08:09:08 crc kubenswrapper[4787]: I0127 08:09:08.843978 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/memcached-0" Jan 27 08:09:08 crc kubenswrapper[4787]: I0127 08:09:08.846617 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"144491fe-49b9-4a76-8da2-db798cf6a1e4","Type":"ContainerStarted","Data":"4b8b94542d9b3f0005adea1a98c96aea842556c492a4a9bd9428e58b7ae1776c"} Jan 27 08:09:08 crc kubenswrapper[4787]: I0127 08:09:08.848286 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"841b999e-6e42-4f28-8fd8-334f69c3e3e6","Type":"ContainerStarted","Data":"f5d5c42142443823eacee17797aa68ba88ae203cfa5b52b13ad3b91e87f9abf6"} Jan 27 08:09:08 crc kubenswrapper[4787]: I0127 08:09:08.869726 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/memcached-0" podStartSLOduration=2.186614048 podStartE2EDuration="14.869696828s" podCreationTimestamp="2026-01-27 08:08:54 +0000 UTC" firstStartedPulling="2026-01-27 08:08:55.143931268 +0000 UTC m=+1040.796286770" lastFinishedPulling="2026-01-27 08:09:07.827014038 +0000 UTC m=+1053.479369550" observedRunningTime="2026-01-27 08:09:08.869043014 +0000 UTC m=+1054.521398516" watchObservedRunningTime="2026-01-27 08:09:08.869696828 +0000 UTC m=+1054.522052340" Jan 27 08:09:10 crc kubenswrapper[4787]: I0127 08:09:10.863449 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"faff0a15-880a-4cf7-a0e0-81d573ace274","Type":"ContainerStarted","Data":"e5125bf2a44027a200b519f739fdd0eef6bd684deedf878b3831835beda70a71"} Jan 27 08:09:10 crc kubenswrapper[4787]: I0127 08:09:10.865507 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"1d9a6f95-2510-4e74-b4f7-fb592d761c91","Type":"ContainerStarted","Data":"013b2f12459c08307712f9e501100fc81d14973b010bcdc9c24006a8c2afc6b8"} Jan 27 08:09:10 crc kubenswrapper[4787]: I0127 08:09:10.868850 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"68be2504-1de8-427c-9937-bd0428f7c5c4","Type":"ContainerStarted","Data":"60be7c927e866796541fb6c53b2b04db37adbfe1b02bf03938d89ea5c5370982"} Jan 27 08:09:10 crc kubenswrapper[4787]: I0127 08:09:10.871684 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" 
event={"ID":"f84bfff4-b4f2-40d8-8b81-a9e5eb776442","Type":"ContainerStarted","Data":"b35fd8bea3d2d9361614b10dbc01f3a16983ab9d5a38a13fd4df6390c81bb921"} Jan 27 08:09:13 crc kubenswrapper[4787]: I0127 08:09:13.906484 4787 generic.go:334] "Generic (PLEG): container finished" podID="144491fe-49b9-4a76-8da2-db798cf6a1e4" containerID="4b8b94542d9b3f0005adea1a98c96aea842556c492a4a9bd9428e58b7ae1776c" exitCode=0 Jan 27 08:09:13 crc kubenswrapper[4787]: I0127 08:09:13.906595 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"144491fe-49b9-4a76-8da2-db798cf6a1e4","Type":"ContainerDied","Data":"4b8b94542d9b3f0005adea1a98c96aea842556c492a4a9bd9428e58b7ae1776c"} Jan 27 08:09:13 crc kubenswrapper[4787]: I0127 08:09:13.912318 4787 generic.go:334] "Generic (PLEG): container finished" podID="841b999e-6e42-4f28-8fd8-334f69c3e3e6" containerID="f5d5c42142443823eacee17797aa68ba88ae203cfa5b52b13ad3b91e87f9abf6" exitCode=0 Jan 27 08:09:13 crc kubenswrapper[4787]: I0127 08:09:13.912982 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"841b999e-6e42-4f28-8fd8-334f69c3e3e6","Type":"ContainerDied","Data":"f5d5c42142443823eacee17797aa68ba88ae203cfa5b52b13ad3b91e87f9abf6"} Jan 27 08:09:14 crc kubenswrapper[4787]: I0127 08:09:14.751368 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/memcached-0" Jan 27 08:09:14 crc kubenswrapper[4787]: I0127 08:09:14.923590 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"841b999e-6e42-4f28-8fd8-334f69c3e3e6","Type":"ContainerStarted","Data":"dd6a00442008c713c575f1ab05ebb090e7f5790a7479efa76fc3ab8a2d2f2a13"} Jan 27 08:09:14 crc kubenswrapper[4787]: I0127 08:09:14.926349 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"144491fe-49b9-4a76-8da2-db798cf6a1e4","Type":"ContainerStarted","Data":"b341bda581055b5a3691fb44350e38e84e31d5b1ca393c095829a7e67cdf9f4c"} Jan 27 08:09:14 crc kubenswrapper[4787]: I0127 08:09:14.951167 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-cell1-galera-0" podStartSLOduration=9.581992863 podStartE2EDuration="20.951146096s" podCreationTimestamp="2026-01-27 08:08:54 +0000 UTC" firstStartedPulling="2026-01-27 08:08:56.506729384 +0000 UTC m=+1042.159084876" lastFinishedPulling="2026-01-27 08:09:07.875882617 +0000 UTC m=+1053.528238109" observedRunningTime="2026-01-27 08:09:14.943140261 +0000 UTC m=+1060.595495773" watchObservedRunningTime="2026-01-27 08:09:14.951146096 +0000 UTC m=+1060.603501588" Jan 27 08:09:14 crc kubenswrapper[4787]: I0127 08:09:14.980105 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-galera-0" podStartSLOduration=10.045044712 podStartE2EDuration="22.980079595s" podCreationTimestamp="2026-01-27 08:08:52 +0000 UTC" firstStartedPulling="2026-01-27 08:08:54.990673518 +0000 UTC m=+1040.643029010" lastFinishedPulling="2026-01-27 08:09:07.925708401 +0000 UTC m=+1053.578063893" observedRunningTime="2026-01-27 08:09:14.976734648 +0000 UTC m=+1060.629090150" watchObservedRunningTime="2026-01-27 08:09:14.980079595 +0000 UTC m=+1060.632435087" Jan 27 08:09:15 crc kubenswrapper[4787]: I0127 08:09:15.794376 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 
08:09:15 crc kubenswrapper[4787]: I0127 08:09:15.794424 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:09:18 crc kubenswrapper[4787]: E0127 08:09:18.998011 4787 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.181:51570->38.102.83.181:36351: write tcp 38.102.83.181:51570->38.102.83.181:36351: write: broken pipe Jan 27 08:09:19 crc kubenswrapper[4787]: I0127 08:09:19.900585 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:09:19 crc kubenswrapper[4787]: I0127 08:09:19.990901 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 08:09:22 crc kubenswrapper[4787]: I0127 08:09:22.822385 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:09:22 crc kubenswrapper[4787]: I0127 08:09:22.822842 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:09:22 crc kubenswrapper[4787]: I0127 08:09:22.822899 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 08:09:22 crc kubenswrapper[4787]: I0127 08:09:22.823659 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8407f8fd138ff9e300a7e8488e02cc30a66d50ed89a7c4ef1d3339c94e2a22b5"} pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:09:22 crc kubenswrapper[4787]: I0127 08:09:22.823735 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" containerID="cri-o://8407f8fd138ff9e300a7e8488e02cc30a66d50ed89a7c4ef1d3339c94e2a22b5" gracePeriod=600 Jan 27 08:09:23 crc kubenswrapper[4787]: I0127 08:09:23.000482 4787 generic.go:334] "Generic (PLEG): container finished" podID="f051e184-acac-47cf-9e04-9df648288715" containerID="8407f8fd138ff9e300a7e8488e02cc30a66d50ed89a7c4ef1d3339c94e2a22b5" exitCode=0 Jan 27 08:09:23 crc kubenswrapper[4787]: I0127 08:09:23.000564 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerDied","Data":"8407f8fd138ff9e300a7e8488e02cc30a66d50ed89a7c4ef1d3339c94e2a22b5"} Jan 27 08:09:23 crc kubenswrapper[4787]: I0127 08:09:23.000619 4787 scope.go:117] "RemoveContainer" containerID="76c11a5da51cedd24f4d664015e61242e532ce42823eb519b69423acc27d454d" Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.013321 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" 
event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerStarted","Data":"62fa4f58004172098a708acf93c4ba2c2d75c5b2ad098327437e26bfa28ab448"} Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.361314 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.362513 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.449799 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.524487 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-c6t44"] Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.525784 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-c6t44" Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.528972 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-cell1-mariadb-root-db-secret" Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.539115 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-c6t44"] Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.630679 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d295e1a-f5ae-49e1-aa8c-75a73af430a4-operator-scripts\") pod \"root-account-create-update-c6t44\" (UID: \"8d295e1a-f5ae-49e1-aa8c-75a73af430a4\") " pod="nova-kuttl-default/root-account-create-update-c6t44" Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.630958 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hllsc\" (UniqueName: \"kubernetes.io/projected/8d295e1a-f5ae-49e1-aa8c-75a73af430a4-kube-api-access-hllsc\") pod \"root-account-create-update-c6t44\" (UID: \"8d295e1a-f5ae-49e1-aa8c-75a73af430a4\") " pod="nova-kuttl-default/root-account-create-update-c6t44" Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.732613 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d295e1a-f5ae-49e1-aa8c-75a73af430a4-operator-scripts\") pod \"root-account-create-update-c6t44\" (UID: \"8d295e1a-f5ae-49e1-aa8c-75a73af430a4\") " pod="nova-kuttl-default/root-account-create-update-c6t44" Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.732890 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hllsc\" (UniqueName: \"kubernetes.io/projected/8d295e1a-f5ae-49e1-aa8c-75a73af430a4-kube-api-access-hllsc\") pod \"root-account-create-update-c6t44\" (UID: \"8d295e1a-f5ae-49e1-aa8c-75a73af430a4\") " pod="nova-kuttl-default/root-account-create-update-c6t44" Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.735372 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d295e1a-f5ae-49e1-aa8c-75a73af430a4-operator-scripts\") pod \"root-account-create-update-c6t44\" (UID: \"8d295e1a-f5ae-49e1-aa8c-75a73af430a4\") " pod="nova-kuttl-default/root-account-create-update-c6t44" Jan 27 08:09:24 crc 
kubenswrapper[4787]: I0127 08:09:24.757215 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hllsc\" (UniqueName: \"kubernetes.io/projected/8d295e1a-f5ae-49e1-aa8c-75a73af430a4-kube-api-access-hllsc\") pod \"root-account-create-update-c6t44\" (UID: \"8d295e1a-f5ae-49e1-aa8c-75a73af430a4\") " pod="nova-kuttl-default/root-account-create-update-c6t44" Jan 27 08:09:24 crc kubenswrapper[4787]: I0127 08:09:24.847683 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-c6t44" Jan 27 08:09:25 crc kubenswrapper[4787]: I0127 08:09:25.106561 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-galera-0" Jan 27 08:09:25 crc kubenswrapper[4787]: I0127 08:09:25.293927 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-c6t44"] Jan 27 08:09:26 crc kubenswrapper[4787]: I0127 08:09:26.038942 4787 generic.go:334] "Generic (PLEG): container finished" podID="8d295e1a-f5ae-49e1-aa8c-75a73af430a4" containerID="d304505100a9e40f977481818afbc2239ed1910ed8a9349b8d4d47740a9431bb" exitCode=0 Jan 27 08:09:26 crc kubenswrapper[4787]: I0127 08:09:26.039041 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-c6t44" event={"ID":"8d295e1a-f5ae-49e1-aa8c-75a73af430a4","Type":"ContainerDied","Data":"d304505100a9e40f977481818afbc2239ed1910ed8a9349b8d4d47740a9431bb"} Jan 27 08:09:26 crc kubenswrapper[4787]: I0127 08:09:26.039422 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-c6t44" event={"ID":"8d295e1a-f5ae-49e1-aa8c-75a73af430a4","Type":"ContainerStarted","Data":"55910de522cd1c628eb820391f2a22eac660770fb65284eb1346248d992f8f13"} Jan 27 08:09:27 crc kubenswrapper[4787]: I0127 08:09:27.430009 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-c6t44" Jan 27 08:09:27 crc kubenswrapper[4787]: I0127 08:09:27.590122 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d295e1a-f5ae-49e1-aa8c-75a73af430a4-operator-scripts\") pod \"8d295e1a-f5ae-49e1-aa8c-75a73af430a4\" (UID: \"8d295e1a-f5ae-49e1-aa8c-75a73af430a4\") " Jan 27 08:09:27 crc kubenswrapper[4787]: I0127 08:09:27.590238 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hllsc\" (UniqueName: \"kubernetes.io/projected/8d295e1a-f5ae-49e1-aa8c-75a73af430a4-kube-api-access-hllsc\") pod \"8d295e1a-f5ae-49e1-aa8c-75a73af430a4\" (UID: \"8d295e1a-f5ae-49e1-aa8c-75a73af430a4\") " Jan 27 08:09:27 crc kubenswrapper[4787]: I0127 08:09:27.591635 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d295e1a-f5ae-49e1-aa8c-75a73af430a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d295e1a-f5ae-49e1-aa8c-75a73af430a4" (UID: "8d295e1a-f5ae-49e1-aa8c-75a73af430a4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:09:27 crc kubenswrapper[4787]: I0127 08:09:27.599908 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d295e1a-f5ae-49e1-aa8c-75a73af430a4-kube-api-access-hllsc" (OuterVolumeSpecName: "kube-api-access-hllsc") pod "8d295e1a-f5ae-49e1-aa8c-75a73af430a4" (UID: "8d295e1a-f5ae-49e1-aa8c-75a73af430a4"). InnerVolumeSpecName "kube-api-access-hllsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:27 crc kubenswrapper[4787]: I0127 08:09:27.693273 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d295e1a-f5ae-49e1-aa8c-75a73af430a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:27 crc kubenswrapper[4787]: I0127 08:09:27.693334 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hllsc\" (UniqueName: \"kubernetes.io/projected/8d295e1a-f5ae-49e1-aa8c-75a73af430a4-kube-api-access-hllsc\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:28 crc kubenswrapper[4787]: I0127 08:09:28.056295 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-c6t44" event={"ID":"8d295e1a-f5ae-49e1-aa8c-75a73af430a4","Type":"ContainerDied","Data":"55910de522cd1c628eb820391f2a22eac660770fb65284eb1346248d992f8f13"} Jan 27 08:09:28 crc kubenswrapper[4787]: I0127 08:09:28.056336 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-c6t44" Jan 27 08:09:28 crc kubenswrapper[4787]: I0127 08:09:28.056350 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55910de522cd1c628eb820391f2a22eac660770fb65284eb1346248d992f8f13" Jan 27 08:09:32 crc kubenswrapper[4787]: I0127 08:09:32.946164 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-c6t44"] Jan 27 08:09:32 crc kubenswrapper[4787]: I0127 08:09:32.953193 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-c6t44"] Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.037859 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-b4lsd"] Jan 27 08:09:33 crc kubenswrapper[4787]: E0127 08:09:33.038256 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d295e1a-f5ae-49e1-aa8c-75a73af430a4" containerName="mariadb-account-create-update" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.038281 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d295e1a-f5ae-49e1-aa8c-75a73af430a4" containerName="mariadb-account-create-update" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.038494 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d295e1a-f5ae-49e1-aa8c-75a73af430a4" containerName="mariadb-account-create-update" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.039173 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-b4lsd" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.041358 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.049685 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-b4lsd"] Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.092660 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3150913-c395-4ed0-a5d3-616426f58671-operator-scripts\") pod \"root-account-create-update-b4lsd\" (UID: \"c3150913-c395-4ed0-a5d3-616426f58671\") " pod="nova-kuttl-default/root-account-create-update-b4lsd" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.092864 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2cg2\" (UniqueName: \"kubernetes.io/projected/c3150913-c395-4ed0-a5d3-616426f58671-kube-api-access-f2cg2\") pod \"root-account-create-update-b4lsd\" (UID: \"c3150913-c395-4ed0-a5d3-616426f58671\") " pod="nova-kuttl-default/root-account-create-update-b4lsd" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.100041 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d295e1a-f5ae-49e1-aa8c-75a73af430a4" path="/var/lib/kubelet/pods/8d295e1a-f5ae-49e1-aa8c-75a73af430a4/volumes" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.194930 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3150913-c395-4ed0-a5d3-616426f58671-operator-scripts\") pod \"root-account-create-update-b4lsd\" (UID: \"c3150913-c395-4ed0-a5d3-616426f58671\") " pod="nova-kuttl-default/root-account-create-update-b4lsd" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.195004 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2cg2\" (UniqueName: \"kubernetes.io/projected/c3150913-c395-4ed0-a5d3-616426f58671-kube-api-access-f2cg2\") pod \"root-account-create-update-b4lsd\" (UID: \"c3150913-c395-4ed0-a5d3-616426f58671\") " pod="nova-kuttl-default/root-account-create-update-b4lsd" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.196189 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3150913-c395-4ed0-a5d3-616426f58671-operator-scripts\") pod \"root-account-create-update-b4lsd\" (UID: \"c3150913-c395-4ed0-a5d3-616426f58671\") " pod="nova-kuttl-default/root-account-create-update-b4lsd" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.221651 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2cg2\" (UniqueName: \"kubernetes.io/projected/c3150913-c395-4ed0-a5d3-616426f58671-kube-api-access-f2cg2\") pod \"root-account-create-update-b4lsd\" (UID: \"c3150913-c395-4ed0-a5d3-616426f58671\") " pod="nova-kuttl-default/root-account-create-update-b4lsd" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.365836 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-b4lsd" Jan 27 08:09:33 crc kubenswrapper[4787]: I0127 08:09:33.839229 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-b4lsd"] Jan 27 08:09:33 crc kubenswrapper[4787]: W0127 08:09:33.845366 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3150913_c395_4ed0_a5d3_616426f58671.slice/crio-a7da237db8236aafbc9b1a0633c16718a4e8ac2260b89cb17571476da9604d22 WatchSource:0}: Error finding container a7da237db8236aafbc9b1a0633c16718a4e8ac2260b89cb17571476da9604d22: Status 404 returned error can't find the container with id a7da237db8236aafbc9b1a0633c16718a4e8ac2260b89cb17571476da9604d22 Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.116460 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-b4lsd" event={"ID":"c3150913-c395-4ed0-a5d3-616426f58671","Type":"ContainerStarted","Data":"b5b33f88492ce6991e1e36a7024a76b4049def405dfe4dc15a1159b39d6f6633"} Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.116866 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-b4lsd" event={"ID":"c3150913-c395-4ed0-a5d3-616426f58671","Type":"ContainerStarted","Data":"a7da237db8236aafbc9b1a0633c16718a4e8ac2260b89cb17571476da9604d22"} Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.137061 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/root-account-create-update-b4lsd" podStartSLOduration=1.137032 podStartE2EDuration="1.137032s" podCreationTimestamp="2026-01-27 08:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:09:34.133194951 +0000 UTC m=+1079.785550453" watchObservedRunningTime="2026-01-27 08:09:34.137032 +0000 UTC m=+1079.789387492" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.226311 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-create-wjmqh"] Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.227460 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-create-wjmqh" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.237971 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-wjmqh"] Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.317934 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e2c3c-82ef-4aaa-b1b3-0465a855bde9-operator-scripts\") pod \"keystone-db-create-wjmqh\" (UID: \"205e2c3c-82ef-4aaa-b1b3-0465a855bde9\") " pod="nova-kuttl-default/keystone-db-create-wjmqh" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.318010 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m6pj\" (UniqueName: \"kubernetes.io/projected/205e2c3c-82ef-4aaa-b1b3-0465a855bde9-kube-api-access-8m6pj\") pod \"keystone-db-create-wjmqh\" (UID: \"205e2c3c-82ef-4aaa-b1b3-0465a855bde9\") " pod="nova-kuttl-default/keystone-db-create-wjmqh" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.343183 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-5828-account-create-update-xh7r6"] Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.344470 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.347084 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-db-secret" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.359532 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-5828-account-create-update-xh7r6"] Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.420587 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mhst\" (UniqueName: \"kubernetes.io/projected/646de171-9e06-4df6-9e1e-07319c04b95c-kube-api-access-6mhst\") pod \"keystone-5828-account-create-update-xh7r6\" (UID: \"646de171-9e06-4df6-9e1e-07319c04b95c\") " pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.420664 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e2c3c-82ef-4aaa-b1b3-0465a855bde9-operator-scripts\") pod \"keystone-db-create-wjmqh\" (UID: \"205e2c3c-82ef-4aaa-b1b3-0465a855bde9\") " pod="nova-kuttl-default/keystone-db-create-wjmqh" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.420695 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646de171-9e06-4df6-9e1e-07319c04b95c-operator-scripts\") pod \"keystone-5828-account-create-update-xh7r6\" (UID: \"646de171-9e06-4df6-9e1e-07319c04b95c\") " pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.420735 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m6pj\" (UniqueName: \"kubernetes.io/projected/205e2c3c-82ef-4aaa-b1b3-0465a855bde9-kube-api-access-8m6pj\") pod \"keystone-db-create-wjmqh\" (UID: \"205e2c3c-82ef-4aaa-b1b3-0465a855bde9\") " pod="nova-kuttl-default/keystone-db-create-wjmqh" Jan 27 08:09:34 crc 
kubenswrapper[4787]: I0127 08:09:34.421358 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e2c3c-82ef-4aaa-b1b3-0465a855bde9-operator-scripts\") pod \"keystone-db-create-wjmqh\" (UID: \"205e2c3c-82ef-4aaa-b1b3-0465a855bde9\") " pod="nova-kuttl-default/keystone-db-create-wjmqh" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.445188 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m6pj\" (UniqueName: \"kubernetes.io/projected/205e2c3c-82ef-4aaa-b1b3-0465a855bde9-kube-api-access-8m6pj\") pod \"keystone-db-create-wjmqh\" (UID: \"205e2c3c-82ef-4aaa-b1b3-0465a855bde9\") " pod="nova-kuttl-default/keystone-db-create-wjmqh" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.521344 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mhst\" (UniqueName: \"kubernetes.io/projected/646de171-9e06-4df6-9e1e-07319c04b95c-kube-api-access-6mhst\") pod \"keystone-5828-account-create-update-xh7r6\" (UID: \"646de171-9e06-4df6-9e1e-07319c04b95c\") " pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.521417 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646de171-9e06-4df6-9e1e-07319c04b95c-operator-scripts\") pod \"keystone-5828-account-create-update-xh7r6\" (UID: \"646de171-9e06-4df6-9e1e-07319c04b95c\") " pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.523239 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646de171-9e06-4df6-9e1e-07319c04b95c-operator-scripts\") pod \"keystone-5828-account-create-update-xh7r6\" (UID: \"646de171-9e06-4df6-9e1e-07319c04b95c\") " pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.540269 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-create-8rt5v"] Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.542140 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-8rt5v" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.553967 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-8rt5v"] Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.559588 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mhst\" (UniqueName: \"kubernetes.io/projected/646de171-9e06-4df6-9e1e-07319c04b95c-kube-api-access-6mhst\") pod \"keystone-5828-account-create-update-xh7r6\" (UID: \"646de171-9e06-4df6-9e1e-07319c04b95c\") " pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.610076 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-create-wjmqh" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.622510 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htzmc\" (UniqueName: \"kubernetes.io/projected/7ba9e1b4-f385-47f9-80f6-508a3406aa2a-kube-api-access-htzmc\") pod \"placement-db-create-8rt5v\" (UID: \"7ba9e1b4-f385-47f9-80f6-508a3406aa2a\") " pod="nova-kuttl-default/placement-db-create-8rt5v" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.622622 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ba9e1b4-f385-47f9-80f6-508a3406aa2a-operator-scripts\") pod \"placement-db-create-8rt5v\" (UID: \"7ba9e1b4-f385-47f9-80f6-508a3406aa2a\") " pod="nova-kuttl-default/placement-db-create-8rt5v" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.640425 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-6d8d-account-create-update-5jvm9"] Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.641701 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.646836 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-db-secret" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.652506 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-6d8d-account-create-update-5jvm9"] Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.664895 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.724093 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htzmc\" (UniqueName: \"kubernetes.io/projected/7ba9e1b4-f385-47f9-80f6-508a3406aa2a-kube-api-access-htzmc\") pod \"placement-db-create-8rt5v\" (UID: \"7ba9e1b4-f385-47f9-80f6-508a3406aa2a\") " pod="nova-kuttl-default/placement-db-create-8rt5v" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.724211 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ba9e1b4-f385-47f9-80f6-508a3406aa2a-operator-scripts\") pod \"placement-db-create-8rt5v\" (UID: \"7ba9e1b4-f385-47f9-80f6-508a3406aa2a\") " pod="nova-kuttl-default/placement-db-create-8rt5v" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.725231 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ba9e1b4-f385-47f9-80f6-508a3406aa2a-operator-scripts\") pod \"placement-db-create-8rt5v\" (UID: \"7ba9e1b4-f385-47f9-80f6-508a3406aa2a\") " pod="nova-kuttl-default/placement-db-create-8rt5v" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.752913 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htzmc\" (UniqueName: \"kubernetes.io/projected/7ba9e1b4-f385-47f9-80f6-508a3406aa2a-kube-api-access-htzmc\") pod \"placement-db-create-8rt5v\" (UID: \"7ba9e1b4-f385-47f9-80f6-508a3406aa2a\") " pod="nova-kuttl-default/placement-db-create-8rt5v" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.828020 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e3d75aa-1156-48f5-9435-b8b3cd2ec555-operator-scripts\") pod \"placement-6d8d-account-create-update-5jvm9\" (UID: \"2e3d75aa-1156-48f5-9435-b8b3cd2ec555\") " pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.828194 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49fcz\" (UniqueName: \"kubernetes.io/projected/2e3d75aa-1156-48f5-9435-b8b3cd2ec555-kube-api-access-49fcz\") pod \"placement-6d8d-account-create-update-5jvm9\" (UID: \"2e3d75aa-1156-48f5-9435-b8b3cd2ec555\") " pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.894400 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-create-8rt5v" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.935658 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49fcz\" (UniqueName: \"kubernetes.io/projected/2e3d75aa-1156-48f5-9435-b8b3cd2ec555-kube-api-access-49fcz\") pod \"placement-6d8d-account-create-update-5jvm9\" (UID: \"2e3d75aa-1156-48f5-9435-b8b3cd2ec555\") " pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.935753 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e3d75aa-1156-48f5-9435-b8b3cd2ec555-operator-scripts\") pod \"placement-6d8d-account-create-update-5jvm9\" (UID: \"2e3d75aa-1156-48f5-9435-b8b3cd2ec555\") " pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.937258 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e3d75aa-1156-48f5-9435-b8b3cd2ec555-operator-scripts\") pod \"placement-6d8d-account-create-update-5jvm9\" (UID: \"2e3d75aa-1156-48f5-9435-b8b3cd2ec555\") " pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" Jan 27 08:09:34 crc kubenswrapper[4787]: I0127 08:09:34.958133 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49fcz\" (UniqueName: \"kubernetes.io/projected/2e3d75aa-1156-48f5-9435-b8b3cd2ec555-kube-api-access-49fcz\") pod \"placement-6d8d-account-create-update-5jvm9\" (UID: \"2e3d75aa-1156-48f5-9435-b8b3cd2ec555\") " pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" Jan 27 08:09:35 crc kubenswrapper[4787]: I0127 08:09:35.041368 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-5828-account-create-update-xh7r6"] Jan 27 08:09:35 crc kubenswrapper[4787]: W0127 08:09:35.043005 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod646de171_9e06_4df6_9e1e_07319c04b95c.slice/crio-61ebf61c0ee28539578df85a272d783dc4becc606055385faa63ee88cef6d0cf WatchSource:0}: Error finding container 61ebf61c0ee28539578df85a272d783dc4becc606055385faa63ee88cef6d0cf: Status 404 returned error can't find the container with id 61ebf61c0ee28539578df85a272d783dc4becc606055385faa63ee88cef6d0cf Jan 27 08:09:35 crc kubenswrapper[4787]: I0127 08:09:35.047265 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" Jan 27 08:09:35 crc kubenswrapper[4787]: I0127 08:09:35.071823 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-db-secret" Jan 27 08:09:35 crc kubenswrapper[4787]: I0127 08:09:35.107023 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-wjmqh"] Jan 27 08:09:35 crc kubenswrapper[4787]: W0127 08:09:35.163283 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod205e2c3c_82ef_4aaa_b1b3_0465a855bde9.slice/crio-3693ebfab339d06c27513a356c82a28e6e678e9480e3c3b9fac9bc0c2d5ed91b WatchSource:0}: Error finding container 3693ebfab339d06c27513a356c82a28e6e678e9480e3c3b9fac9bc0c2d5ed91b: Status 404 returned error can't find the container with id 3693ebfab339d06c27513a356c82a28e6e678e9480e3c3b9fac9bc0c2d5ed91b Jan 27 08:09:35 crc kubenswrapper[4787]: I0127 08:09:35.164057 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" event={"ID":"646de171-9e06-4df6-9e1e-07319c04b95c","Type":"ContainerStarted","Data":"61ebf61c0ee28539578df85a272d783dc4becc606055385faa63ee88cef6d0cf"} Jan 27 08:09:35 crc kubenswrapper[4787]: I0127 08:09:35.171076 4787 generic.go:334] "Generic (PLEG): container finished" podID="c3150913-c395-4ed0-a5d3-616426f58671" containerID="b5b33f88492ce6991e1e36a7024a76b4049def405dfe4dc15a1159b39d6f6633" exitCode=0 Jan 27 08:09:35 crc kubenswrapper[4787]: I0127 08:09:35.171139 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-b4lsd" event={"ID":"c3150913-c395-4ed0-a5d3-616426f58671","Type":"ContainerDied","Data":"b5b33f88492ce6991e1e36a7024a76b4049def405dfe4dc15a1159b39d6f6633"} Jan 27 08:09:35 crc kubenswrapper[4787]: I0127 08:09:35.344049 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-8rt5v"] Jan 27 08:09:35 crc kubenswrapper[4787]: W0127 08:09:35.347317 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ba9e1b4_f385_47f9_80f6_508a3406aa2a.slice/crio-644b272208b8e1b224dfece73de7f0ce1f7d4fb488f30fb14df01a26475a730e WatchSource:0}: Error finding container 644b272208b8e1b224dfece73de7f0ce1f7d4fb488f30fb14df01a26475a730e: Status 404 returned error can't find the container with id 644b272208b8e1b224dfece73de7f0ce1f7d4fb488f30fb14df01a26475a730e Jan 27 08:09:35 crc kubenswrapper[4787]: I0127 08:09:35.518253 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-6d8d-account-create-update-5jvm9"] Jan 27 08:09:35 crc kubenswrapper[4787]: I0127 08:09:35.530024 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-db-secret" Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.186791 4787 generic.go:334] "Generic (PLEG): container finished" podID="7ba9e1b4-f385-47f9-80f6-508a3406aa2a" containerID="9ab563345b762a72fbe3b47df8d23be9db6758acc4bcc324921200a4b4455e30" exitCode=0 Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.186901 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-8rt5v" event={"ID":"7ba9e1b4-f385-47f9-80f6-508a3406aa2a","Type":"ContainerDied","Data":"9ab563345b762a72fbe3b47df8d23be9db6758acc4bcc324921200a4b4455e30"} Jan 27 08:09:36 
crc kubenswrapper[4787]: I0127 08:09:36.186946 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-8rt5v" event={"ID":"7ba9e1b4-f385-47f9-80f6-508a3406aa2a","Type":"ContainerStarted","Data":"644b272208b8e1b224dfece73de7f0ce1f7d4fb488f30fb14df01a26475a730e"} Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.188877 4787 generic.go:334] "Generic (PLEG): container finished" podID="646de171-9e06-4df6-9e1e-07319c04b95c" containerID="a478f74f2f9db06c59e7ba51e2be735e9b9fb9d652017f85e3a9daea1ac8536e" exitCode=0 Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.188995 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" event={"ID":"646de171-9e06-4df6-9e1e-07319c04b95c","Type":"ContainerDied","Data":"a478f74f2f9db06c59e7ba51e2be735e9b9fb9d652017f85e3a9daea1ac8536e"} Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.191136 4787 generic.go:334] "Generic (PLEG): container finished" podID="205e2c3c-82ef-4aaa-b1b3-0465a855bde9" containerID="2e3468c57e9613d84bee598f4f0c5a4b7e44d892a408fe4d11cddeffe9d5b60b" exitCode=0 Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.191175 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-wjmqh" event={"ID":"205e2c3c-82ef-4aaa-b1b3-0465a855bde9","Type":"ContainerDied","Data":"2e3468c57e9613d84bee598f4f0c5a4b7e44d892a408fe4d11cddeffe9d5b60b"} Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.191222 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-wjmqh" event={"ID":"205e2c3c-82ef-4aaa-b1b3-0465a855bde9","Type":"ContainerStarted","Data":"3693ebfab339d06c27513a356c82a28e6e678e9480e3c3b9fac9bc0c2d5ed91b"} Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.192925 4787 generic.go:334] "Generic (PLEG): container finished" podID="2e3d75aa-1156-48f5-9435-b8b3cd2ec555" containerID="3f06a186b769d4a1f5e238fd19184a35781ff17d23f2779b2c34abe34bf41b44" exitCode=0 Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.192952 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" event={"ID":"2e3d75aa-1156-48f5-9435-b8b3cd2ec555","Type":"ContainerDied","Data":"3f06a186b769d4a1f5e238fd19184a35781ff17d23f2779b2c34abe34bf41b44"} Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.192987 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" event={"ID":"2e3d75aa-1156-48f5-9435-b8b3cd2ec555","Type":"ContainerStarted","Data":"b404522cda13cd80426b31e5ac61fc611fc37a31277fcf44a25db1d76987c6f3"} Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.523195 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-b4lsd" Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.578934 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2cg2\" (UniqueName: \"kubernetes.io/projected/c3150913-c395-4ed0-a5d3-616426f58671-kube-api-access-f2cg2\") pod \"c3150913-c395-4ed0-a5d3-616426f58671\" (UID: \"c3150913-c395-4ed0-a5d3-616426f58671\") " Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.579076 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3150913-c395-4ed0-a5d3-616426f58671-operator-scripts\") pod \"c3150913-c395-4ed0-a5d3-616426f58671\" (UID: \"c3150913-c395-4ed0-a5d3-616426f58671\") " Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.580485 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3150913-c395-4ed0-a5d3-616426f58671-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3150913-c395-4ed0-a5d3-616426f58671" (UID: "c3150913-c395-4ed0-a5d3-616426f58671"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.588143 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3150913-c395-4ed0-a5d3-616426f58671-kube-api-access-f2cg2" (OuterVolumeSpecName: "kube-api-access-f2cg2") pod "c3150913-c395-4ed0-a5d3-616426f58671" (UID: "c3150913-c395-4ed0-a5d3-616426f58671"). InnerVolumeSpecName "kube-api-access-f2cg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.682961 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2cg2\" (UniqueName: \"kubernetes.io/projected/c3150913-c395-4ed0-a5d3-616426f58671-kube-api-access-f2cg2\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:36 crc kubenswrapper[4787]: I0127 08:09:36.683024 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3150913-c395-4ed0-a5d3-616426f58671-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.204163 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-b4lsd" event={"ID":"c3150913-c395-4ed0-a5d3-616426f58671","Type":"ContainerDied","Data":"a7da237db8236aafbc9b1a0633c16718a4e8ac2260b89cb17571476da9604d22"} Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.204233 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7da237db8236aafbc9b1a0633c16718a4e8ac2260b89cb17571476da9604d22" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.204567 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-b4lsd" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.597793 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.626439 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49fcz\" (UniqueName: \"kubernetes.io/projected/2e3d75aa-1156-48f5-9435-b8b3cd2ec555-kube-api-access-49fcz\") pod \"2e3d75aa-1156-48f5-9435-b8b3cd2ec555\" (UID: \"2e3d75aa-1156-48f5-9435-b8b3cd2ec555\") " Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.626648 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e3d75aa-1156-48f5-9435-b8b3cd2ec555-operator-scripts\") pod \"2e3d75aa-1156-48f5-9435-b8b3cd2ec555\" (UID: \"2e3d75aa-1156-48f5-9435-b8b3cd2ec555\") " Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.628031 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e3d75aa-1156-48f5-9435-b8b3cd2ec555-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e3d75aa-1156-48f5-9435-b8b3cd2ec555" (UID: "2e3d75aa-1156-48f5-9435-b8b3cd2ec555"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.635433 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e3d75aa-1156-48f5-9435-b8b3cd2ec555-kube-api-access-49fcz" (OuterVolumeSpecName: "kube-api-access-49fcz") pod "2e3d75aa-1156-48f5-9435-b8b3cd2ec555" (UID: "2e3d75aa-1156-48f5-9435-b8b3cd2ec555"). InnerVolumeSpecName "kube-api-access-49fcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.727964 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49fcz\" (UniqueName: \"kubernetes.io/projected/2e3d75aa-1156-48f5-9435-b8b3cd2ec555-kube-api-access-49fcz\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.728010 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e3d75aa-1156-48f5-9435-b8b3cd2ec555-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.735155 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.743474 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-8rt5v" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.761765 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-create-wjmqh" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.830202 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m6pj\" (UniqueName: \"kubernetes.io/projected/205e2c3c-82ef-4aaa-b1b3-0465a855bde9-kube-api-access-8m6pj\") pod \"205e2c3c-82ef-4aaa-b1b3-0465a855bde9\" (UID: \"205e2c3c-82ef-4aaa-b1b3-0465a855bde9\") " Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.830354 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e2c3c-82ef-4aaa-b1b3-0465a855bde9-operator-scripts\") pod \"205e2c3c-82ef-4aaa-b1b3-0465a855bde9\" (UID: \"205e2c3c-82ef-4aaa-b1b3-0465a855bde9\") " Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.830393 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646de171-9e06-4df6-9e1e-07319c04b95c-operator-scripts\") pod \"646de171-9e06-4df6-9e1e-07319c04b95c\" (UID: \"646de171-9e06-4df6-9e1e-07319c04b95c\") " Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.830460 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htzmc\" (UniqueName: \"kubernetes.io/projected/7ba9e1b4-f385-47f9-80f6-508a3406aa2a-kube-api-access-htzmc\") pod \"7ba9e1b4-f385-47f9-80f6-508a3406aa2a\" (UID: \"7ba9e1b4-f385-47f9-80f6-508a3406aa2a\") " Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.830501 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mhst\" (UniqueName: \"kubernetes.io/projected/646de171-9e06-4df6-9e1e-07319c04b95c-kube-api-access-6mhst\") pod \"646de171-9e06-4df6-9e1e-07319c04b95c\" (UID: \"646de171-9e06-4df6-9e1e-07319c04b95c\") " Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.830581 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ba9e1b4-f385-47f9-80f6-508a3406aa2a-operator-scripts\") pod \"7ba9e1b4-f385-47f9-80f6-508a3406aa2a\" (UID: \"7ba9e1b4-f385-47f9-80f6-508a3406aa2a\") " Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.831438 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205e2c3c-82ef-4aaa-b1b3-0465a855bde9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "205e2c3c-82ef-4aaa-b1b3-0465a855bde9" (UID: "205e2c3c-82ef-4aaa-b1b3-0465a855bde9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.831483 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/646de171-9e06-4df6-9e1e-07319c04b95c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "646de171-9e06-4df6-9e1e-07319c04b95c" (UID: "646de171-9e06-4df6-9e1e-07319c04b95c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.831687 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ba9e1b4-f385-47f9-80f6-508a3406aa2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ba9e1b4-f385-47f9-80f6-508a3406aa2a" (UID: "7ba9e1b4-f385-47f9-80f6-508a3406aa2a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.834635 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205e2c3c-82ef-4aaa-b1b3-0465a855bde9-kube-api-access-8m6pj" (OuterVolumeSpecName: "kube-api-access-8m6pj") pod "205e2c3c-82ef-4aaa-b1b3-0465a855bde9" (UID: "205e2c3c-82ef-4aaa-b1b3-0465a855bde9"). InnerVolumeSpecName "kube-api-access-8m6pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.834869 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646de171-9e06-4df6-9e1e-07319c04b95c-kube-api-access-6mhst" (OuterVolumeSpecName: "kube-api-access-6mhst") pod "646de171-9e06-4df6-9e1e-07319c04b95c" (UID: "646de171-9e06-4df6-9e1e-07319c04b95c"). InnerVolumeSpecName "kube-api-access-6mhst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.834938 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ba9e1b4-f385-47f9-80f6-508a3406aa2a-kube-api-access-htzmc" (OuterVolumeSpecName: "kube-api-access-htzmc") pod "7ba9e1b4-f385-47f9-80f6-508a3406aa2a" (UID: "7ba9e1b4-f385-47f9-80f6-508a3406aa2a"). InnerVolumeSpecName "kube-api-access-htzmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.932819 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htzmc\" (UniqueName: \"kubernetes.io/projected/7ba9e1b4-f385-47f9-80f6-508a3406aa2a-kube-api-access-htzmc\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.932868 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mhst\" (UniqueName: \"kubernetes.io/projected/646de171-9e06-4df6-9e1e-07319c04b95c-kube-api-access-6mhst\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.932879 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ba9e1b4-f385-47f9-80f6-508a3406aa2a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.932891 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m6pj\" (UniqueName: \"kubernetes.io/projected/205e2c3c-82ef-4aaa-b1b3-0465a855bde9-kube-api-access-8m6pj\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.932907 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e2c3c-82ef-4aaa-b1b3-0465a855bde9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:37 crc kubenswrapper[4787]: I0127 08:09:37.932918 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646de171-9e06-4df6-9e1e-07319c04b95c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:38 crc kubenswrapper[4787]: I0127 08:09:38.213868 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-8rt5v" event={"ID":"7ba9e1b4-f385-47f9-80f6-508a3406aa2a","Type":"ContainerDied","Data":"644b272208b8e1b224dfece73de7f0ce1f7d4fb488f30fb14df01a26475a730e"} Jan 27 08:09:38 crc kubenswrapper[4787]: I0127 08:09:38.213922 4787 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="644b272208b8e1b224dfece73de7f0ce1f7d4fb488f30fb14df01a26475a730e" Jan 27 08:09:38 crc kubenswrapper[4787]: I0127 08:09:38.213950 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-8rt5v" Jan 27 08:09:38 crc kubenswrapper[4787]: I0127 08:09:38.216437 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" event={"ID":"646de171-9e06-4df6-9e1e-07319c04b95c","Type":"ContainerDied","Data":"61ebf61c0ee28539578df85a272d783dc4becc606055385faa63ee88cef6d0cf"} Jan 27 08:09:38 crc kubenswrapper[4787]: I0127 08:09:38.216474 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-5828-account-create-update-xh7r6" Jan 27 08:09:38 crc kubenswrapper[4787]: I0127 08:09:38.216494 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ebf61c0ee28539578df85a272d783dc4becc606055385faa63ee88cef6d0cf" Jan 27 08:09:38 crc kubenswrapper[4787]: I0127 08:09:38.223204 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-wjmqh" event={"ID":"205e2c3c-82ef-4aaa-b1b3-0465a855bde9","Type":"ContainerDied","Data":"3693ebfab339d06c27513a356c82a28e6e678e9480e3c3b9fac9bc0c2d5ed91b"} Jan 27 08:09:38 crc kubenswrapper[4787]: I0127 08:09:38.223277 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3693ebfab339d06c27513a356c82a28e6e678e9480e3c3b9fac9bc0c2d5ed91b" Jan 27 08:09:38 crc kubenswrapper[4787]: I0127 08:09:38.223420 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-wjmqh" Jan 27 08:09:38 crc kubenswrapper[4787]: I0127 08:09:38.229401 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" event={"ID":"2e3d75aa-1156-48f5-9435-b8b3cd2ec555","Type":"ContainerDied","Data":"b404522cda13cd80426b31e5ac61fc611fc37a31277fcf44a25db1d76987c6f3"} Jan 27 08:09:38 crc kubenswrapper[4787]: I0127 08:09:38.229491 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b404522cda13cd80426b31e5ac61fc611fc37a31277fcf44a25db1d76987c6f3" Jan 27 08:09:38 crc kubenswrapper[4787]: I0127 08:09:38.229656 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-6d8d-account-create-update-5jvm9" Jan 27 08:09:42 crc kubenswrapper[4787]: I0127 08:09:42.261879 4787 generic.go:334] "Generic (PLEG): container finished" podID="f84bfff4-b4f2-40d8-8b81-a9e5eb776442" containerID="b35fd8bea3d2d9361614b10dbc01f3a16983ab9d5a38a13fd4df6390c81bb921" exitCode=0 Jan 27 08:09:42 crc kubenswrapper[4787]: I0127 08:09:42.262068 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"f84bfff4-b4f2-40d8-8b81-a9e5eb776442","Type":"ContainerDied","Data":"b35fd8bea3d2d9361614b10dbc01f3a16983ab9d5a38a13fd4df6390c81bb921"} Jan 27 08:09:42 crc kubenswrapper[4787]: I0127 08:09:42.268575 4787 generic.go:334] "Generic (PLEG): container finished" podID="1d9a6f95-2510-4e74-b4f7-fb592d761c91" containerID="013b2f12459c08307712f9e501100fc81d14973b010bcdc9c24006a8c2afc6b8" exitCode=0 Jan 27 08:09:42 crc kubenswrapper[4787]: I0127 08:09:42.268645 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"1d9a6f95-2510-4e74-b4f7-fb592d761c91","Type":"ContainerDied","Data":"013b2f12459c08307712f9e501100fc81d14973b010bcdc9c24006a8c2afc6b8"} Jan 27 08:09:43 crc kubenswrapper[4787]: I0127 08:09:43.279309 4787 generic.go:334] "Generic (PLEG): container finished" podID="faff0a15-880a-4cf7-a0e0-81d573ace274" containerID="e5125bf2a44027a200b519f739fdd0eef6bd684deedf878b3831835beda70a71" exitCode=0 Jan 27 08:09:43 crc kubenswrapper[4787]: I0127 08:09:43.279398 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"faff0a15-880a-4cf7-a0e0-81d573ace274","Type":"ContainerDied","Data":"e5125bf2a44027a200b519f739fdd0eef6bd684deedf878b3831835beda70a71"} Jan 27 08:09:43 crc kubenswrapper[4787]: I0127 08:09:43.289460 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"1d9a6f95-2510-4e74-b4f7-fb592d761c91","Type":"ContainerStarted","Data":"41a8905f95161f2a326d5047b8d1d059604e94f798b145536a28b7152f8c2dae"} Jan 27 08:09:43 crc kubenswrapper[4787]: I0127 08:09:43.290689 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:09:43 crc kubenswrapper[4787]: I0127 08:09:43.297212 4787 generic.go:334] "Generic (PLEG): container finished" podID="68be2504-1de8-427c-9937-bd0428f7c5c4" containerID="60be7c927e866796541fb6c53b2b04db37adbfe1b02bf03938d89ea5c5370982" exitCode=0 Jan 27 08:09:43 crc kubenswrapper[4787]: I0127 08:09:43.297326 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"68be2504-1de8-427c-9937-bd0428f7c5c4","Type":"ContainerDied","Data":"60be7c927e866796541fb6c53b2b04db37adbfe1b02bf03938d89ea5c5370982"} Jan 27 08:09:43 crc kubenswrapper[4787]: I0127 08:09:43.301460 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"f84bfff4-b4f2-40d8-8b81-a9e5eb776442","Type":"ContainerStarted","Data":"61897f0c250cb27f30a8bbf48aae1210700d43e93c5aad704d626be0a37a1f7e"} Jan 27 08:09:43 crc kubenswrapper[4787]: I0127 08:09:43.302333 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:09:43 crc kubenswrapper[4787]: I0127 08:09:43.344312 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/rabbitmq-server-0" podStartSLOduration=38.044440905 podStartE2EDuration="52.34428576s" podCreationTimestamp="2026-01-27 08:08:51 +0000 UTC" firstStartedPulling="2026-01-27 08:08:53.552885947 +0000 UTC m=+1039.205241429" lastFinishedPulling="2026-01-27 08:09:07.852730792 +0000 UTC m=+1053.505086284" observedRunningTime="2026-01-27 08:09:43.336260464 +0000 UTC m=+1088.988615976" watchObservedRunningTime="2026-01-27 08:09:43.34428576 +0000 UTC m=+1088.996641312" Jan 27 08:09:43 crc kubenswrapper[4787]: I0127 08:09:43.398487 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=38.174741592 podStartE2EDuration="51.398453762s" podCreationTimestamp="2026-01-27 08:08:52 +0000 UTC" firstStartedPulling="2026-01-27 08:08:54.675043597 +0000 UTC m=+1040.327399089" lastFinishedPulling="2026-01-27 08:09:07.898755767 +0000 UTC m=+1053.551111259" observedRunningTime="2026-01-27 08:09:43.394397118 +0000 UTC m=+1089.046752620" watchObservedRunningTime="2026-01-27 08:09:43.398453762 +0000 UTC m=+1089.050809254" Jan 27 08:09:44 crc kubenswrapper[4787]: I0127 08:09:44.316536 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"faff0a15-880a-4cf7-a0e0-81d573ace274","Type":"ContainerStarted","Data":"912d2fdf7d631c89ca941a924532475a7eb85dfa59356b1adf65daa134ee919f"} Jan 27 08:09:44 crc kubenswrapper[4787]: I0127 08:09:44.319865 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"68be2504-1de8-427c-9937-bd0428f7c5c4","Type":"ContainerStarted","Data":"86d9ea0f6998d29e24154814c190683b5445a6bb1e46dece11aa76d82707ed56"} Jan 27 08:09:44 crc kubenswrapper[4787]: I0127 08:09:44.348633 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podStartSLOduration=38.661062162 podStartE2EDuration="52.348611863s" podCreationTimestamp="2026-01-27 08:08:52 +0000 UTC" firstStartedPulling="2026-01-27 08:08:54.167359451 +0000 UTC m=+1039.819714943" lastFinishedPulling="2026-01-27 08:09:07.854909152 +0000 UTC m=+1053.507264644" observedRunningTime="2026-01-27 08:09:44.339574714 +0000 UTC m=+1089.991930236" watchObservedRunningTime="2026-01-27 08:09:44.348611863 +0000 UTC m=+1090.000967355" Jan 27 08:09:44 crc kubenswrapper[4787]: I0127 08:09:44.369768 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podStartSLOduration=39.198435332 podStartE2EDuration="53.369739432s" podCreationTimestamp="2026-01-27 08:08:51 +0000 UTC" firstStartedPulling="2026-01-27 08:08:53.655707558 +0000 UTC m=+1039.308063050" lastFinishedPulling="2026-01-27 08:09:07.827011658 +0000 UTC m=+1053.479367150" observedRunningTime="2026-01-27 08:09:44.365186917 +0000 UTC m=+1090.017542419" watchObservedRunningTime="2026-01-27 08:09:44.369739432 +0000 UTC m=+1090.022094924" Jan 27 08:09:53 crc kubenswrapper[4787]: I0127 08:09:53.094785 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/rabbitmq-server-0" podUID="f84bfff4-b4f2-40d8-8b81-a9e5eb776442" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5672: connect: connection refused" Jan 27 08:09:53 crc kubenswrapper[4787]: I0127 08:09:53.383437 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 
27 08:09:53 crc kubenswrapper[4787]: I0127 08:09:53.384683 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podUID="68be2504-1de8-427c-9937-bd0428f7c5c4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5672: connect: connection refused" Jan 27 08:09:53 crc kubenswrapper[4787]: I0127 08:09:53.385376 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podUID="68be2504-1de8-427c-9937-bd0428f7c5c4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5672: connect: connection refused" Jan 27 08:09:53 crc kubenswrapper[4787]: I0127 08:09:53.689625 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:09:53 crc kubenswrapper[4787]: I0127 08:09:53.691534 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podUID="faff0a15-880a-4cf7-a0e0-81d573ace274" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5672: connect: connection refused" Jan 27 08:09:53 crc kubenswrapper[4787]: I0127 08:09:53.691936 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podUID="faff0a15-880a-4cf7-a0e0-81d573ace274" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5672: connect: connection refused" Jan 27 08:09:54 crc kubenswrapper[4787]: I0127 08:09:54.039733 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/rabbitmq-notifications-server-0" podUID="1d9a6f95-2510-4e74-b4f7-fb592d761c91" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5672: connect: connection refused" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.093519 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.385776 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.690738 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.777391 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-sync-ztn8q"] Jan 27 08:10:03 crc kubenswrapper[4787]: E0127 08:10:03.777892 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3d75aa-1156-48f5-9435-b8b3cd2ec555" containerName="mariadb-account-create-update" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.777919 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3d75aa-1156-48f5-9435-b8b3cd2ec555" containerName="mariadb-account-create-update" Jan 27 08:10:03 crc kubenswrapper[4787]: E0127 08:10:03.777941 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3150913-c395-4ed0-a5d3-616426f58671" containerName="mariadb-account-create-update" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.777950 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3150913-c395-4ed0-a5d3-616426f58671" containerName="mariadb-account-create-update" Jan 27 08:10:03 crc kubenswrapper[4787]: E0127 08:10:03.777974 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="205e2c3c-82ef-4aaa-b1b3-0465a855bde9" containerName="mariadb-database-create" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.777981 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="205e2c3c-82ef-4aaa-b1b3-0465a855bde9" containerName="mariadb-database-create" Jan 27 08:10:03 crc kubenswrapper[4787]: E0127 08:10:03.777998 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646de171-9e06-4df6-9e1e-07319c04b95c" containerName="mariadb-account-create-update" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.778005 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="646de171-9e06-4df6-9e1e-07319c04b95c" containerName="mariadb-account-create-update" Jan 27 08:10:03 crc kubenswrapper[4787]: E0127 08:10:03.778018 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ba9e1b4-f385-47f9-80f6-508a3406aa2a" containerName="mariadb-database-create" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.778025 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ba9e1b4-f385-47f9-80f6-508a3406aa2a" containerName="mariadb-database-create" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.778224 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="205e2c3c-82ef-4aaa-b1b3-0465a855bde9" containerName="mariadb-database-create" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.778243 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3150913-c395-4ed0-a5d3-616426f58671" containerName="mariadb-account-create-update" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.778256 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="646de171-9e06-4df6-9e1e-07319c04b95c" containerName="mariadb-account-create-update" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.778272 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ba9e1b4-f385-47f9-80f6-508a3406aa2a" containerName="mariadb-database-create" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.778284 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3d75aa-1156-48f5-9435-b8b3cd2ec555" containerName="mariadb-account-create-update" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.779156 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.782367 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.782652 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.782852 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-z2dpt" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.782905 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.785517 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-ztn8q"] Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.875648 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nn7r\" (UniqueName: \"kubernetes.io/projected/c5da7883-eba0-4fa4-92c5-9085b0ce0090-kube-api-access-5nn7r\") pod \"keystone-db-sync-ztn8q\" (UID: \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\") " pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.875783 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5da7883-eba0-4fa4-92c5-9085b0ce0090-config-data\") pod \"keystone-db-sync-ztn8q\" (UID: \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\") " pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.875875 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5da7883-eba0-4fa4-92c5-9085b0ce0090-combined-ca-bundle\") pod \"keystone-db-sync-ztn8q\" (UID: \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\") " pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.977709 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5da7883-eba0-4fa4-92c5-9085b0ce0090-config-data\") pod \"keystone-db-sync-ztn8q\" (UID: \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\") " pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.977815 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5da7883-eba0-4fa4-92c5-9085b0ce0090-combined-ca-bundle\") pod \"keystone-db-sync-ztn8q\" (UID: \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\") " pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.977878 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nn7r\" (UniqueName: \"kubernetes.io/projected/c5da7883-eba0-4fa4-92c5-9085b0ce0090-kube-api-access-5nn7r\") pod \"keystone-db-sync-ztn8q\" (UID: \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\") " pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.986212 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5da7883-eba0-4fa4-92c5-9085b0ce0090-combined-ca-bundle\") pod \"keystone-db-sync-ztn8q\" (UID: \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\") " pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:03 crc kubenswrapper[4787]: I0127 08:10:03.986262 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5da7883-eba0-4fa4-92c5-9085b0ce0090-config-data\") pod \"keystone-db-sync-ztn8q\" (UID: \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\") " pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:04 crc kubenswrapper[4787]: I0127 08:10:04.001819 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nn7r\" (UniqueName: \"kubernetes.io/projected/c5da7883-eba0-4fa4-92c5-9085b0ce0090-kube-api-access-5nn7r\") pod \"keystone-db-sync-ztn8q\" (UID: \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\") " pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:04 crc kubenswrapper[4787]: I0127 08:10:04.038144 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-notifications-server-0" Jan 27 08:10:04 crc kubenswrapper[4787]: I0127 08:10:04.101192 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:04 crc kubenswrapper[4787]: I0127 08:10:04.611519 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-ztn8q"] Jan 27 08:10:05 crc kubenswrapper[4787]: I0127 08:10:05.495995 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-ztn8q" event={"ID":"c5da7883-eba0-4fa4-92c5-9085b0ce0090","Type":"ContainerStarted","Data":"577257a6a0504fab18c8af07f7f4643b2ff43f7abb2b83cd8e0f2644b25042a5"} Jan 27 08:10:12 crc kubenswrapper[4787]: I0127 08:10:12.608679 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-ztn8q" event={"ID":"c5da7883-eba0-4fa4-92c5-9085b0ce0090","Type":"ContainerStarted","Data":"fdbb6282b854859c8ab712b9eae897a43f0552adf90b843b092e2da77b58ae00"} Jan 27 08:10:15 crc kubenswrapper[4787]: I0127 08:10:15.636440 4787 generic.go:334] "Generic (PLEG): container finished" podID="c5da7883-eba0-4fa4-92c5-9085b0ce0090" containerID="fdbb6282b854859c8ab712b9eae897a43f0552adf90b843b092e2da77b58ae00" exitCode=0 Jan 27 08:10:15 crc kubenswrapper[4787]: I0127 08:10:15.636533 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-ztn8q" event={"ID":"c5da7883-eba0-4fa4-92c5-9085b0ce0090","Type":"ContainerDied","Data":"fdbb6282b854859c8ab712b9eae897a43f0552adf90b843b092e2da77b58ae00"} Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.026044 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.133064 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nn7r\" (UniqueName: \"kubernetes.io/projected/c5da7883-eba0-4fa4-92c5-9085b0ce0090-kube-api-access-5nn7r\") pod \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\" (UID: \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\") " Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.133157 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5da7883-eba0-4fa4-92c5-9085b0ce0090-config-data\") pod \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\" (UID: \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\") " Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.133409 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5da7883-eba0-4fa4-92c5-9085b0ce0090-combined-ca-bundle\") pod \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\" (UID: \"c5da7883-eba0-4fa4-92c5-9085b0ce0090\") " Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.140645 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5da7883-eba0-4fa4-92c5-9085b0ce0090-kube-api-access-5nn7r" (OuterVolumeSpecName: "kube-api-access-5nn7r") pod "c5da7883-eba0-4fa4-92c5-9085b0ce0090" (UID: "c5da7883-eba0-4fa4-92c5-9085b0ce0090"). InnerVolumeSpecName "kube-api-access-5nn7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.157285 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5da7883-eba0-4fa4-92c5-9085b0ce0090-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5da7883-eba0-4fa4-92c5-9085b0ce0090" (UID: "c5da7883-eba0-4fa4-92c5-9085b0ce0090"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.179940 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5da7883-eba0-4fa4-92c5-9085b0ce0090-config-data" (OuterVolumeSpecName: "config-data") pod "c5da7883-eba0-4fa4-92c5-9085b0ce0090" (UID: "c5da7883-eba0-4fa4-92c5-9085b0ce0090"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.235649 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5da7883-eba0-4fa4-92c5-9085b0ce0090-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.235694 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nn7r\" (UniqueName: \"kubernetes.io/projected/c5da7883-eba0-4fa4-92c5-9085b0ce0090-kube-api-access-5nn7r\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.235707 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5da7883-eba0-4fa4-92c5-9085b0ce0090-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.663363 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-ztn8q" event={"ID":"c5da7883-eba0-4fa4-92c5-9085b0ce0090","Type":"ContainerDied","Data":"577257a6a0504fab18c8af07f7f4643b2ff43f7abb2b83cd8e0f2644b25042a5"} Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.663423 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-ztn8q" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.663446 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="577257a6a0504fab18c8af07f7f4643b2ff43f7abb2b83cd8e0f2644b25042a5" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.891249 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gzqt8"] Jan 27 08:10:17 crc kubenswrapper[4787]: E0127 08:10:17.891743 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5da7883-eba0-4fa4-92c5-9085b0ce0090" containerName="keystone-db-sync" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.891768 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5da7883-eba0-4fa4-92c5-9085b0ce0090" containerName="keystone-db-sync" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.891997 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5da7883-eba0-4fa4-92c5-9085b0ce0090" containerName="keystone-db-sync" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.892838 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.904313 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.904591 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.904757 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-z2dpt" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.904792 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.904893 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.910288 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gzqt8"] Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.946928 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-fernet-keys\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.946979 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-credential-keys\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.946997 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-combined-ca-bundle\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.947038 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn7jh\" (UniqueName: \"kubernetes.io/projected/25481a95-2112-4569-8db7-38a3bc2f2137-kube-api-access-bn7jh\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.947293 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-config-data\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:17 crc kubenswrapper[4787]: I0127 08:10:17.947452 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-scripts\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 
crc kubenswrapper[4787]: I0127 08:10:18.049383 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-config-data\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.049472 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-scripts\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.049784 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-fernet-keys\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.049806 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-credential-keys\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.049823 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-combined-ca-bundle\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.049876 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn7jh\" (UniqueName: \"kubernetes.io/projected/25481a95-2112-4569-8db7-38a3bc2f2137-kube-api-access-bn7jh\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.055543 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-combined-ca-bundle\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.056876 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-config-data\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.062175 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-scripts\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.079330 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-credential-keys\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.089423 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-fernet-keys\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.102304 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn7jh\" (UniqueName: \"kubernetes.io/projected/25481a95-2112-4569-8db7-38a3bc2f2137-kube-api-access-bn7jh\") pod \"keystone-bootstrap-gzqt8\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.149595 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-sync-7x42z"] Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.151074 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.170829 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.171289 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.182162 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-7wn8v" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.185644 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-7x42z"] Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.222020 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.254767 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-logs\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.255206 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-combined-ca-bundle\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.255313 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-config-data\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.255433 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pfpx\" (UniqueName: \"kubernetes.io/projected/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-kube-api-access-5pfpx\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.255515 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-scripts\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.357803 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-logs\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.358311 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-combined-ca-bundle\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.358346 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-config-data\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.358403 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pfpx\" (UniqueName: \"kubernetes.io/projected/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-kube-api-access-5pfpx\") pod \"placement-db-sync-7x42z\" (UID: 
\"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.358426 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-scripts\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.363304 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-scripts\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.363331 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-logs\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.366283 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-combined-ca-bundle\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.367389 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-config-data\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.395386 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pfpx\" (UniqueName: \"kubernetes.io/projected/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-kube-api-access-5pfpx\") pod \"placement-db-sync-7x42z\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.494594 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:18 crc kubenswrapper[4787]: I0127 08:10:18.799937 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gzqt8"] Jan 27 08:10:19 crc kubenswrapper[4787]: I0127 08:10:19.004326 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-7x42z"] Jan 27 08:10:19 crc kubenswrapper[4787]: I0127 08:10:19.691674 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-7x42z" event={"ID":"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8","Type":"ContainerStarted","Data":"872a1d52e4e7762f61f87cf9f200fc0c3aae697c3f41a540eb1bc95d926012b6"} Jan 27 08:10:19 crc kubenswrapper[4787]: I0127 08:10:19.699229 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-gzqt8" event={"ID":"25481a95-2112-4569-8db7-38a3bc2f2137","Type":"ContainerStarted","Data":"50c79c9d4510d9aad33c8c6b7d7b07dfbec318be7cf6d78cc13f3f9ebe9f87c3"} Jan 27 08:10:19 crc kubenswrapper[4787]: I0127 08:10:19.699276 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-gzqt8" event={"ID":"25481a95-2112-4569-8db7-38a3bc2f2137","Type":"ContainerStarted","Data":"513f29482f207a5eac5aadf34dfc86089d3ef24e51546c32e97b5dc398026284"} Jan 27 08:10:22 crc kubenswrapper[4787]: I0127 08:10:22.736187 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-7x42z" event={"ID":"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8","Type":"ContainerStarted","Data":"144343248d2b16217cf2dcb9392f4fccabd0a94e089b16a6bacae928d2fbb414"} Jan 27 08:10:22 crc kubenswrapper[4787]: I0127 08:10:22.738125 4787 generic.go:334] "Generic (PLEG): container finished" podID="25481a95-2112-4569-8db7-38a3bc2f2137" containerID="50c79c9d4510d9aad33c8c6b7d7b07dfbec318be7cf6d78cc13f3f9ebe9f87c3" exitCode=0 Jan 27 08:10:22 crc kubenswrapper[4787]: I0127 08:10:22.738170 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-gzqt8" event={"ID":"25481a95-2112-4569-8db7-38a3bc2f2137","Type":"ContainerDied","Data":"50c79c9d4510d9aad33c8c6b7d7b07dfbec318be7cf6d78cc13f3f9ebe9f87c3"} Jan 27 08:10:22 crc kubenswrapper[4787]: I0127 08:10:22.758013 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-db-sync-7x42z" podStartSLOduration=1.541027396 podStartE2EDuration="4.757989944s" podCreationTimestamp="2026-01-27 08:10:18 +0000 UTC" firstStartedPulling="2026-01-27 08:10:19.008190455 +0000 UTC m=+1124.660545947" lastFinishedPulling="2026-01-27 08:10:22.225153003 +0000 UTC m=+1127.877508495" observedRunningTime="2026-01-27 08:10:22.757291657 +0000 UTC m=+1128.409647179" watchObservedRunningTime="2026-01-27 08:10:22.757989944 +0000 UTC m=+1128.410345436" Jan 27 08:10:22 crc kubenswrapper[4787]: I0127 08:10:22.758830 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-gzqt8" podStartSLOduration=5.758825375 podStartE2EDuration="5.758825375s" podCreationTimestamp="2026-01-27 08:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:19.725226829 +0000 UTC m=+1125.377582321" watchObservedRunningTime="2026-01-27 08:10:22.758825375 +0000 UTC m=+1128.411180877" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.108770 4787 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.187530 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-config-data\") pod \"25481a95-2112-4569-8db7-38a3bc2f2137\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.187692 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-combined-ca-bundle\") pod \"25481a95-2112-4569-8db7-38a3bc2f2137\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.187790 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-fernet-keys\") pod \"25481a95-2112-4569-8db7-38a3bc2f2137\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.187927 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-scripts\") pod \"25481a95-2112-4569-8db7-38a3bc2f2137\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.187969 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-credential-keys\") pod \"25481a95-2112-4569-8db7-38a3bc2f2137\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.188531 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn7jh\" (UniqueName: \"kubernetes.io/projected/25481a95-2112-4569-8db7-38a3bc2f2137-kube-api-access-bn7jh\") pod \"25481a95-2112-4569-8db7-38a3bc2f2137\" (UID: \"25481a95-2112-4569-8db7-38a3bc2f2137\") " Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.193919 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-scripts" (OuterVolumeSpecName: "scripts") pod "25481a95-2112-4569-8db7-38a3bc2f2137" (UID: "25481a95-2112-4569-8db7-38a3bc2f2137"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.193940 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "25481a95-2112-4569-8db7-38a3bc2f2137" (UID: "25481a95-2112-4569-8db7-38a3bc2f2137"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.194377 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "25481a95-2112-4569-8db7-38a3bc2f2137" (UID: "25481a95-2112-4569-8db7-38a3bc2f2137"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.194632 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25481a95-2112-4569-8db7-38a3bc2f2137-kube-api-access-bn7jh" (OuterVolumeSpecName: "kube-api-access-bn7jh") pod "25481a95-2112-4569-8db7-38a3bc2f2137" (UID: "25481a95-2112-4569-8db7-38a3bc2f2137"). InnerVolumeSpecName "kube-api-access-bn7jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.210221 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-config-data" (OuterVolumeSpecName: "config-data") pod "25481a95-2112-4569-8db7-38a3bc2f2137" (UID: "25481a95-2112-4569-8db7-38a3bc2f2137"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.213797 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25481a95-2112-4569-8db7-38a3bc2f2137" (UID: "25481a95-2112-4569-8db7-38a3bc2f2137"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.290926 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.290973 4787 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.290988 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn7jh\" (UniqueName: \"kubernetes.io/projected/25481a95-2112-4569-8db7-38a3bc2f2137-kube-api-access-bn7jh\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.291003 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.291018 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.291031 4787 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25481a95-2112-4569-8db7-38a3bc2f2137-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.760839 4787 generic.go:334] "Generic (PLEG): container finished" podID="54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8" containerID="144343248d2b16217cf2dcb9392f4fccabd0a94e089b16a6bacae928d2fbb414" exitCode=0 Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.760938 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-7x42z" event={"ID":"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8","Type":"ContainerDied","Data":"144343248d2b16217cf2dcb9392f4fccabd0a94e089b16a6bacae928d2fbb414"} 
Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.765673 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-gzqt8" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.765627 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-gzqt8" event={"ID":"25481a95-2112-4569-8db7-38a3bc2f2137","Type":"ContainerDied","Data":"513f29482f207a5eac5aadf34dfc86089d3ef24e51546c32e97b5dc398026284"} Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.765963 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="513f29482f207a5eac5aadf34dfc86089d3ef24e51546c32e97b5dc398026284" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.878266 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gzqt8"] Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.886101 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gzqt8"] Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.968165 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-bn5rd"] Jan 27 08:10:24 crc kubenswrapper[4787]: E0127 08:10:24.968641 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25481a95-2112-4569-8db7-38a3bc2f2137" containerName="keystone-bootstrap" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.968658 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="25481a95-2112-4569-8db7-38a3bc2f2137" containerName="keystone-bootstrap" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.968873 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="25481a95-2112-4569-8db7-38a3bc2f2137" containerName="keystone-bootstrap" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.969800 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.972260 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.972488 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.972487 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.978663 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.979280 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-z2dpt" Jan 27 08:10:24 crc kubenswrapper[4787]: I0127 08:10:24.990432 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-bn5rd"] Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.089108 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25481a95-2112-4569-8db7-38a3bc2f2137" path="/var/lib/kubelet/pods/25481a95-2112-4569-8db7-38a3bc2f2137/volumes" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.118187 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-combined-ca-bundle\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.118253 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-credential-keys\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.118750 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-config-data\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.118859 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb569\" (UniqueName: \"kubernetes.io/projected/6d885638-7008-41c1-85fc-43025469e404-kube-api-access-vb569\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.118897 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-fernet-keys\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.118927 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-scripts\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.220594 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-config-data\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.220681 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb569\" (UniqueName: \"kubernetes.io/projected/6d885638-7008-41c1-85fc-43025469e404-kube-api-access-vb569\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.220713 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-fernet-keys\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.220748 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-scripts\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.220799 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-combined-ca-bundle\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.220825 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-credential-keys\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.227167 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-config-data\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.227364 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-scripts\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.227642 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-fernet-keys\") pod 
\"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.228297 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-credential-keys\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.228968 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-combined-ca-bundle\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.242505 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb569\" (UniqueName: \"kubernetes.io/projected/6d885638-7008-41c1-85fc-43025469e404-kube-api-access-vb569\") pod \"keystone-bootstrap-bn5rd\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.296675 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.733700 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-bn5rd"] Jan 27 08:10:25 crc kubenswrapper[4787]: W0127 08:10:25.739279 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d885638_7008_41c1_85fc_43025469e404.slice/crio-1539c8370b4e55ac0c5e0d82bf5a698b7c243437a52cb95a840651309378bd7c WatchSource:0}: Error finding container 1539c8370b4e55ac0c5e0d82bf5a698b7c243437a52cb95a840651309378bd7c: Status 404 returned error can't find the container with id 1539c8370b4e55ac0c5e0d82bf5a698b7c243437a52cb95a840651309378bd7c Jan 27 08:10:25 crc kubenswrapper[4787]: I0127 08:10:25.775413 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-bn5rd" event={"ID":"6d885638-7008-41c1-85fc-43025469e404","Type":"ContainerStarted","Data":"1539c8370b4e55ac0c5e0d82bf5a698b7c243437a52cb95a840651309378bd7c"} Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.091833 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.240913 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-config-data\") pod \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.241883 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-logs\") pod \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.242005 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-scripts\") pod \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.242356 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-combined-ca-bundle\") pod \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.242385 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pfpx\" (UniqueName: \"kubernetes.io/projected/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-kube-api-access-5pfpx\") pod \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\" (UID: \"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8\") " Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.243075 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-logs" (OuterVolumeSpecName: "logs") pod "54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8" (UID: "54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.247857 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-scripts" (OuterVolumeSpecName: "scripts") pod "54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8" (UID: "54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.248786 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-kube-api-access-5pfpx" (OuterVolumeSpecName: "kube-api-access-5pfpx") pod "54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8" (UID: "54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8"). InnerVolumeSpecName "kube-api-access-5pfpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.268918 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-config-data" (OuterVolumeSpecName: "config-data") pod "54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8" (UID: "54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.271954 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8" (UID: "54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.344535 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.344634 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pfpx\" (UniqueName: \"kubernetes.io/projected/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-kube-api-access-5pfpx\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.344667 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.344694 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.344716 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.801841 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-7x42z" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.801883 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-7x42z" event={"ID":"54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8","Type":"ContainerDied","Data":"872a1d52e4e7762f61f87cf9f200fc0c3aae697c3f41a540eb1bc95d926012b6"} Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.802684 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="872a1d52e4e7762f61f87cf9f200fc0c3aae697c3f41a540eb1bc95d926012b6" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.808050 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-bn5rd" event={"ID":"6d885638-7008-41c1-85fc-43025469e404","Type":"ContainerStarted","Data":"dcf70ae1779bf3b7d74781efddb0114399b6ad2886db99535b6b73b54d7d6a8a"} Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.849616 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-bn5rd" podStartSLOduration=2.849593424 podStartE2EDuration="2.849593424s" podCreationTimestamp="2026-01-27 08:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:26.848115918 +0000 UTC m=+1132.500471430" watchObservedRunningTime="2026-01-27 08:10:26.849593424 +0000 UTC m=+1132.501948926" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.919211 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-545ff75684-kqpgf"] Jan 27 08:10:26 crc kubenswrapper[4787]: E0127 08:10:26.919722 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8" containerName="placement-db-sync" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.919746 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8" containerName="placement-db-sync" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.922650 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8" containerName="placement-db-sync" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.924156 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.930226 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.930765 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.932119 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-7wn8v" Jan 27 08:10:26 crc kubenswrapper[4787]: I0127 08:10:26.936847 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-545ff75684-kqpgf"] Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.056546 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4kkh\" (UniqueName: \"kubernetes.io/projected/7d647302-7ff9-465d-b5a5-a3a76f35df28-kube-api-access-p4kkh\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.056986 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d647302-7ff9-465d-b5a5-a3a76f35df28-logs\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.057212 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d647302-7ff9-465d-b5a5-a3a76f35df28-scripts\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.057309 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d647302-7ff9-465d-b5a5-a3a76f35df28-combined-ca-bundle\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.057418 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d647302-7ff9-465d-b5a5-a3a76f35df28-config-data\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.159633 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d647302-7ff9-465d-b5a5-a3a76f35df28-config-data\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.159797 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4kkh\" (UniqueName: \"kubernetes.io/projected/7d647302-7ff9-465d-b5a5-a3a76f35df28-kube-api-access-p4kkh\") pod \"placement-545ff75684-kqpgf\" (UID: 
\"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.159902 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d647302-7ff9-465d-b5a5-a3a76f35df28-logs\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.160059 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d647302-7ff9-465d-b5a5-a3a76f35df28-scripts\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.160422 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d647302-7ff9-465d-b5a5-a3a76f35df28-combined-ca-bundle\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.164196 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d647302-7ff9-465d-b5a5-a3a76f35df28-logs\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.169177 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d647302-7ff9-465d-b5a5-a3a76f35df28-scripts\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.179435 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d647302-7ff9-465d-b5a5-a3a76f35df28-config-data\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.179910 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d647302-7ff9-465d-b5a5-a3a76f35df28-combined-ca-bundle\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.191436 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4kkh\" (UniqueName: \"kubernetes.io/projected/7d647302-7ff9-465d-b5a5-a3a76f35df28-kube-api-access-p4kkh\") pod \"placement-545ff75684-kqpgf\" (UID: \"7d647302-7ff9-465d-b5a5-a3a76f35df28\") " pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.245295 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.768125 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-545ff75684-kqpgf"] Jan 27 08:10:27 crc kubenswrapper[4787]: I0127 08:10:27.817302 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-545ff75684-kqpgf" event={"ID":"7d647302-7ff9-465d-b5a5-a3a76f35df28","Type":"ContainerStarted","Data":"2e17e25cf11e8b834817e98069cc4bdbc698fede2ca94b56564f6522ff9488f3"} Jan 27 08:10:30 crc kubenswrapper[4787]: I0127 08:10:30.843226 4787 generic.go:334] "Generic (PLEG): container finished" podID="6d885638-7008-41c1-85fc-43025469e404" containerID="dcf70ae1779bf3b7d74781efddb0114399b6ad2886db99535b6b73b54d7d6a8a" exitCode=0 Jan 27 08:10:30 crc kubenswrapper[4787]: I0127 08:10:30.843380 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-bn5rd" event={"ID":"6d885638-7008-41c1-85fc-43025469e404","Type":"ContainerDied","Data":"dcf70ae1779bf3b7d74781efddb0114399b6ad2886db99535b6b73b54d7d6a8a"} Jan 27 08:10:30 crc kubenswrapper[4787]: I0127 08:10:30.849787 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-545ff75684-kqpgf" event={"ID":"7d647302-7ff9-465d-b5a5-a3a76f35df28","Type":"ContainerStarted","Data":"561fd098a927e0f1c0e5e6d9d248dfa5eaae4828ebf0342272a35906a29f1571"} Jan 27 08:10:30 crc kubenswrapper[4787]: I0127 08:10:30.849957 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-545ff75684-kqpgf" event={"ID":"7d647302-7ff9-465d-b5a5-a3a76f35df28","Type":"ContainerStarted","Data":"bb3404cc80cba5655f9e2ee8dc067fd50f8c9cd1dd1a18135ebb1ca4dfe6afa9"} Jan 27 08:10:30 crc kubenswrapper[4787]: I0127 08:10:30.851430 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:30 crc kubenswrapper[4787]: I0127 08:10:30.851584 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:30 crc kubenswrapper[4787]: I0127 08:10:30.904824 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-545ff75684-kqpgf" podStartSLOduration=4.904799191 podStartE2EDuration="4.904799191s" podCreationTimestamp="2026-01-27 08:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:30.903382116 +0000 UTC m=+1136.555737678" watchObservedRunningTime="2026-01-27 08:10:30.904799191 +0000 UTC m=+1136.557154703" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.220868 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.276094 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-fernet-keys\") pod \"6d885638-7008-41c1-85fc-43025469e404\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.276184 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-combined-ca-bundle\") pod \"6d885638-7008-41c1-85fc-43025469e404\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.276342 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb569\" (UniqueName: \"kubernetes.io/projected/6d885638-7008-41c1-85fc-43025469e404-kube-api-access-vb569\") pod \"6d885638-7008-41c1-85fc-43025469e404\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.276397 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-scripts\") pod \"6d885638-7008-41c1-85fc-43025469e404\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.276416 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-config-data\") pod \"6d885638-7008-41c1-85fc-43025469e404\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.276476 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-credential-keys\") pod \"6d885638-7008-41c1-85fc-43025469e404\" (UID: \"6d885638-7008-41c1-85fc-43025469e404\") " Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.283620 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d885638-7008-41c1-85fc-43025469e404-kube-api-access-vb569" (OuterVolumeSpecName: "kube-api-access-vb569") pod "6d885638-7008-41c1-85fc-43025469e404" (UID: "6d885638-7008-41c1-85fc-43025469e404"). InnerVolumeSpecName "kube-api-access-vb569". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.283681 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6d885638-7008-41c1-85fc-43025469e404" (UID: "6d885638-7008-41c1-85fc-43025469e404"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.283828 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-scripts" (OuterVolumeSpecName: "scripts") pod "6d885638-7008-41c1-85fc-43025469e404" (UID: "6d885638-7008-41c1-85fc-43025469e404"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.287502 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6d885638-7008-41c1-85fc-43025469e404" (UID: "6d885638-7008-41c1-85fc-43025469e404"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.306229 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-config-data" (OuterVolumeSpecName: "config-data") pod "6d885638-7008-41c1-85fc-43025469e404" (UID: "6d885638-7008-41c1-85fc-43025469e404"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.306735 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d885638-7008-41c1-85fc-43025469e404" (UID: "6d885638-7008-41c1-85fc-43025469e404"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.378470 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb569\" (UniqueName: \"kubernetes.io/projected/6d885638-7008-41c1-85fc-43025469e404-kube-api-access-vb569\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.378507 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.378516 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.378525 4787 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.378534 4787 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.378542 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d885638-7008-41c1-85fc-43025469e404-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.869951 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-bn5rd" event={"ID":"6d885638-7008-41c1-85fc-43025469e404","Type":"ContainerDied","Data":"1539c8370b4e55ac0c5e0d82bf5a698b7c243437a52cb95a840651309378bd7c"} Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.870401 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1539c8370b4e55ac0c5e0d82bf5a698b7c243437a52cb95a840651309378bd7c" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.870008 4787 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-bn5rd" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.963835 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-678dff9ff-k6jwl"] Jan 27 08:10:32 crc kubenswrapper[4787]: E0127 08:10:32.964216 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d885638-7008-41c1-85fc-43025469e404" containerName="keystone-bootstrap" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.964231 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d885638-7008-41c1-85fc-43025469e404" containerName="keystone-bootstrap" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.964406 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d885638-7008-41c1-85fc-43025469e404" containerName="keystone-bootstrap" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.965066 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.967760 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-z2dpt" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.971268 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.974536 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.979300 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 27 08:10:32 crc kubenswrapper[4787]: I0127 08:10:32.982851 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-678dff9ff-k6jwl"] Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.092141 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llzkr\" (UniqueName: \"kubernetes.io/projected/91494b32-1284-49fb-b548-3fbc5f1e1ddf-kube-api-access-llzkr\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.092198 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-credential-keys\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.092390 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-combined-ca-bundle\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.092756 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-config-data\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " 
pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.092920 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-fernet-keys\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.093104 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-scripts\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.194593 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llzkr\" (UniqueName: \"kubernetes.io/projected/91494b32-1284-49fb-b548-3fbc5f1e1ddf-kube-api-access-llzkr\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.194644 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-credential-keys\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.194685 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-combined-ca-bundle\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.194752 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-config-data\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.194789 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-fernet-keys\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.194824 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-scripts\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.200608 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-scripts\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc 
kubenswrapper[4787]: I0127 08:10:33.202736 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-config-data\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.207292 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-credential-keys\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.207917 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-combined-ca-bundle\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.214323 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91494b32-1284-49fb-b548-3fbc5f1e1ddf-fernet-keys\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.263053 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llzkr\" (UniqueName: \"kubernetes.io/projected/91494b32-1284-49fb-b548-3fbc5f1e1ddf-kube-api-access-llzkr\") pod \"keystone-678dff9ff-k6jwl\" (UID: \"91494b32-1284-49fb-b548-3fbc5f1e1ddf\") " pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.284981 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.768076 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-678dff9ff-k6jwl"] Jan 27 08:10:33 crc kubenswrapper[4787]: I0127 08:10:33.880849 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" event={"ID":"91494b32-1284-49fb-b548-3fbc5f1e1ddf","Type":"ContainerStarted","Data":"fe183bddc005b46d8e0a08fe43c497acbbbe6d36f6f9ffe76f7d65ae4c116664"} Jan 27 08:10:34 crc kubenswrapper[4787]: I0127 08:10:34.890939 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" event={"ID":"91494b32-1284-49fb-b548-3fbc5f1e1ddf","Type":"ContainerStarted","Data":"e517dd65d86b2e96567d28e4ec1b91bea1fbb099c07e3b575ef26f54f268fc51"} Jan 27 08:10:34 crc kubenswrapper[4787]: I0127 08:10:34.891076 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:10:34 crc kubenswrapper[4787]: I0127 08:10:34.928902 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" podStartSLOduration=2.9288756830000002 podStartE2EDuration="2.928875683s" podCreationTimestamp="2026-01-27 08:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:34.916064818 +0000 UTC m=+1140.568420320" watchObservedRunningTime="2026-01-27 08:10:34.928875683 +0000 UTC m=+1140.581231195" Jan 27 08:10:58 crc kubenswrapper[4787]: I0127 08:10:58.373019 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:10:58 crc kubenswrapper[4787]: I0127 08:10:58.375070 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-545ff75684-kqpgf" Jan 27 08:11:04 crc kubenswrapper[4787]: I0127 08:11:04.899595 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/keystone-678dff9ff-k6jwl" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.413112 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.414970 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.418695 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.419085 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-config-secret" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.419430 4787 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstackclient-openstackclient-dockercfg-bg2tl" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.426802 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.601676 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4826cdba-006d-447f-a206-367bd1dc8893-openstack-config-secret\") pod \"openstackclient\" (UID: \"4826cdba-006d-447f-a206-367bd1dc8893\") " pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.601892 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkggv\" (UniqueName: \"kubernetes.io/projected/4826cdba-006d-447f-a206-367bd1dc8893-kube-api-access-jkggv\") pod \"openstackclient\" (UID: \"4826cdba-006d-447f-a206-367bd1dc8893\") " pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.602295 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4826cdba-006d-447f-a206-367bd1dc8893-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4826cdba-006d-447f-a206-367bd1dc8893\") " pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.602354 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4826cdba-006d-447f-a206-367bd1dc8893-openstack-config\") pod \"openstackclient\" (UID: \"4826cdba-006d-447f-a206-367bd1dc8893\") " pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.703909 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4826cdba-006d-447f-a206-367bd1dc8893-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4826cdba-006d-447f-a206-367bd1dc8893\") " pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.703958 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4826cdba-006d-447f-a206-367bd1dc8893-openstack-config\") pod \"openstackclient\" (UID: \"4826cdba-006d-447f-a206-367bd1dc8893\") " pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.704009 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4826cdba-006d-447f-a206-367bd1dc8893-openstack-config-secret\") pod \"openstackclient\" (UID: \"4826cdba-006d-447f-a206-367bd1dc8893\") " pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc 
kubenswrapper[4787]: I0127 08:11:09.704043 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkggv\" (UniqueName: \"kubernetes.io/projected/4826cdba-006d-447f-a206-367bd1dc8893-kube-api-access-jkggv\") pod \"openstackclient\" (UID: \"4826cdba-006d-447f-a206-367bd1dc8893\") " pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.706051 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4826cdba-006d-447f-a206-367bd1dc8893-openstack-config\") pod \"openstackclient\" (UID: \"4826cdba-006d-447f-a206-367bd1dc8893\") " pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.711701 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4826cdba-006d-447f-a206-367bd1dc8893-openstack-config-secret\") pod \"openstackclient\" (UID: \"4826cdba-006d-447f-a206-367bd1dc8893\") " pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.718296 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4826cdba-006d-447f-a206-367bd1dc8893-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4826cdba-006d-447f-a206-367bd1dc8893\") " pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.720649 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkggv\" (UniqueName: \"kubernetes.io/projected/4826cdba-006d-447f-a206-367bd1dc8893-kube-api-access-jkggv\") pod \"openstackclient\" (UID: \"4826cdba-006d-447f-a206-367bd1dc8893\") " pod="nova-kuttl-default/openstackclient" Jan 27 08:11:09 crc kubenswrapper[4787]: I0127 08:11:09.740203 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 27 08:11:10 crc kubenswrapper[4787]: I0127 08:11:10.216937 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 27 08:11:11 crc kubenswrapper[4787]: I0127 08:11:11.208631 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"4826cdba-006d-447f-a206-367bd1dc8893","Type":"ContainerStarted","Data":"a77022d652bfe8800a86ef8cf777ec9be7b6f1f6b19e884420ba2176df40c740"} Jan 27 08:11:18 crc kubenswrapper[4787]: I0127 08:11:18.268943 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"4826cdba-006d-447f-a206-367bd1dc8893","Type":"ContainerStarted","Data":"700a782a32d14e996c4275c291752160fe2343396aef23102a9c39a0782e7024"} Jan 27 08:11:18 crc kubenswrapper[4787]: I0127 08:11:18.284985 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstackclient" podStartSLOduration=2.00080998 podStartE2EDuration="9.284965773s" podCreationTimestamp="2026-01-27 08:11:09 +0000 UTC" firstStartedPulling="2026-01-27 08:11:10.225232852 +0000 UTC m=+1175.877588344" lastFinishedPulling="2026-01-27 08:11:17.509388605 +0000 UTC m=+1183.161744137" observedRunningTime="2026-01-27 08:11:18.283869916 +0000 UTC m=+1183.936225408" watchObservedRunningTime="2026-01-27 08:11:18.284965773 +0000 UTC m=+1183.937321265" Jan 27 08:11:27 crc kubenswrapper[4787]: I0127 08:11:27.404855 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs"] Jan 27 08:11:27 crc kubenswrapper[4787]: I0127 08:11:27.405809 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" podUID="3ad17a28-2cb1-4455-8d38-58919a287ae6" containerName="manager" containerID="cri-o://e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488" gracePeriod=10 Jan 27 08:11:27 crc kubenswrapper[4787]: I0127 08:11:27.813958 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-w98b8"] Jan 27 08:11:27 crc kubenswrapper[4787]: I0127 08:11:27.855318 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt"] Jan 27 08:11:27 crc kubenswrapper[4787]: I0127 08:11:27.879559 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-w98b8" Jan 27 08:11:27 crc kubenswrapper[4787]: I0127 08:11:27.879934 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt" Jan 27 08:11:27 crc kubenswrapper[4787]: I0127 08:11:27.888663 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-w98b8"] Jan 27 08:11:27 crc kubenswrapper[4787]: I0127 08:11:27.914529 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" Jan 27 08:11:27 crc kubenswrapper[4787]: I0127 08:11:27.926006 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-index-dockercfg-6ndhv" Jan 27 08:11:27 crc kubenswrapper[4787]: I0127 08:11:27.942330 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6bn4\" (UniqueName: \"kubernetes.io/projected/d837af28-1f85-4995-9299-df5a288df488-kube-api-access-j6bn4\") pod \"nova-operator-index-w98b8\" (UID: \"d837af28-1f85-4995-9299-df5a288df488\") " pod="openstack-operators/nova-operator-index-w98b8" Jan 27 08:11:27 crc kubenswrapper[4787]: I0127 08:11:27.942459 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkxsb\" (UniqueName: \"kubernetes.io/projected/b0a80c53-be1f-444d-a49a-05c89daad297-kube-api-access-bkxsb\") pod \"nova-operator-controller-manager-57d6b69d8b-j9nxt\" (UID: \"b0a80c53-be1f-444d-a49a-05c89daad297\") " pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.043591 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vdjb\" (UniqueName: \"kubernetes.io/projected/3ad17a28-2cb1-4455-8d38-58919a287ae6-kube-api-access-2vdjb\") pod \"3ad17a28-2cb1-4455-8d38-58919a287ae6\" (UID: \"3ad17a28-2cb1-4455-8d38-58919a287ae6\") " Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.044706 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkxsb\" (UniqueName: \"kubernetes.io/projected/b0a80c53-be1f-444d-a49a-05c89daad297-kube-api-access-bkxsb\") pod \"nova-operator-controller-manager-57d6b69d8b-j9nxt\" (UID: \"b0a80c53-be1f-444d-a49a-05c89daad297\") " pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.044767 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6bn4\" (UniqueName: \"kubernetes.io/projected/d837af28-1f85-4995-9299-df5a288df488-kube-api-access-j6bn4\") pod \"nova-operator-index-w98b8\" (UID: \"d837af28-1f85-4995-9299-df5a288df488\") " pod="openstack-operators/nova-operator-index-w98b8" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.053191 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad17a28-2cb1-4455-8d38-58919a287ae6-kube-api-access-2vdjb" (OuterVolumeSpecName: "kube-api-access-2vdjb") pod "3ad17a28-2cb1-4455-8d38-58919a287ae6" (UID: "3ad17a28-2cb1-4455-8d38-58919a287ae6"). InnerVolumeSpecName "kube-api-access-2vdjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.077721 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkxsb\" (UniqueName: \"kubernetes.io/projected/b0a80c53-be1f-444d-a49a-05c89daad297-kube-api-access-bkxsb\") pod \"nova-operator-controller-manager-57d6b69d8b-j9nxt\" (UID: \"b0a80c53-be1f-444d-a49a-05c89daad297\") " pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.109814 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6bn4\" (UniqueName: \"kubernetes.io/projected/d837af28-1f85-4995-9299-df5a288df488-kube-api-access-j6bn4\") pod \"nova-operator-index-w98b8\" (UID: \"d837af28-1f85-4995-9299-df5a288df488\") " pod="openstack-operators/nova-operator-index-w98b8" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.146565 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vdjb\" (UniqueName: \"kubernetes.io/projected/3ad17a28-2cb1-4455-8d38-58919a287ae6-kube-api-access-2vdjb\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.223710 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt"] Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.236031 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-w98b8" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.244282 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.280572 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq"] Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.281771 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" podUID="86e4bb51-0d54-49f8-b158-8338d845d92e" containerName="operator" containerID="cri-o://5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f" gracePeriod=10 Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.354899 4787 generic.go:334] "Generic (PLEG): container finished" podID="3ad17a28-2cb1-4455-8d38-58919a287ae6" containerID="e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488" exitCode=0 Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.354959 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.354956 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" event={"ID":"3ad17a28-2cb1-4455-8d38-58919a287ae6","Type":"ContainerDied","Data":"e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488"} Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.356219 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs" event={"ID":"3ad17a28-2cb1-4455-8d38-58919a287ae6","Type":"ContainerDied","Data":"2b4b23f00d60ad642771edd29b4d11331ba1889dbee6abb9a2f10d40a9233501"} Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.356261 4787 scope.go:117] "RemoveContainer" containerID="e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.435703 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs"] Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.440752 4787 scope.go:117] "RemoveContainer" containerID="e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488" Jan 27 08:11:28 crc kubenswrapper[4787]: E0127 08:11:28.443728 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488\": container with ID starting with e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488 not found: ID does not exist" containerID="e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.443797 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488"} err="failed to get container status \"e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488\": rpc error: code = NotFound desc = could not find container \"e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488\": container with ID starting with e67cc281b7eb394ed1cf80319838ad97dd8c88447cab1e5fbeb55f20bbbe9488 not found: ID does not exist" Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.451841 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57d6b69d8b-s8kvs"] Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.821918 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt"] Jan 27 08:11:28 crc kubenswrapper[4787]: I0127 08:11:28.967427 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-w98b8"] Jan 27 08:11:29 crc kubenswrapper[4787]: W0127 08:11:29.007783 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd837af28_1f85_4995_9299_df5a288df488.slice/crio-a503e0936eba5a61fa705651473098014686ae86a0e2f0c7e561df05e7f9307f WatchSource:0}: Error finding container a503e0936eba5a61fa705651473098014686ae86a0e2f0c7e561df05e7f9307f: Status 404 returned error can't find the container with id a503e0936eba5a61fa705651473098014686ae86a0e2f0c7e561df05e7f9307f Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 
08:11:29.040655 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.058974 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lftt5\" (UniqueName: \"kubernetes.io/projected/86e4bb51-0d54-49f8-b158-8338d845d92e-kube-api-access-lftt5\") pod \"86e4bb51-0d54-49f8-b158-8338d845d92e\" (UID: \"86e4bb51-0d54-49f8-b158-8338d845d92e\") " Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.065830 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e4bb51-0d54-49f8-b158-8338d845d92e-kube-api-access-lftt5" (OuterVolumeSpecName: "kube-api-access-lftt5") pod "86e4bb51-0d54-49f8-b158-8338d845d92e" (UID: "86e4bb51-0d54-49f8-b158-8338d845d92e"). InnerVolumeSpecName "kube-api-access-lftt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.088863 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad17a28-2cb1-4455-8d38-58919a287ae6" path="/var/lib/kubelet/pods/3ad17a28-2cb1-4455-8d38-58919a287ae6/volumes" Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.160227 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lftt5\" (UniqueName: \"kubernetes.io/projected/86e4bb51-0d54-49f8-b158-8338d845d92e-kube-api-access-lftt5\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.364468 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-w98b8" event={"ID":"d837af28-1f85-4995-9299-df5a288df488","Type":"ContainerStarted","Data":"a503e0936eba5a61fa705651473098014686ae86a0e2f0c7e561df05e7f9307f"} Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.367175 4787 generic.go:334] "Generic (PLEG): container finished" podID="86e4bb51-0d54-49f8-b158-8338d845d92e" containerID="5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f" exitCode=0 Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.367242 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" event={"ID":"86e4bb51-0d54-49f8-b158-8338d845d92e","Type":"ContainerDied","Data":"5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f"} Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.367247 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.367269 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq" event={"ID":"86e4bb51-0d54-49f8-b158-8338d845d92e","Type":"ContainerDied","Data":"0a86cb2179439de11cc5dfa2c7d576a7b85ad83b8b80b1dd922ae9e6f93584cf"} Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.367292 4787 scope.go:117] "RemoveContainer" containerID="5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f" Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.369592 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt" event={"ID":"b0a80c53-be1f-444d-a49a-05c89daad297","Type":"ContainerStarted","Data":"48b79785028da806daee7f9f9c684f5e147f8b578e7cccc7511b149d38aa6cce"} Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.369621 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt" event={"ID":"b0a80c53-be1f-444d-a49a-05c89daad297","Type":"ContainerStarted","Data":"cf3c11c74cb52d6c2694795eb36955d9b2e7cc2dd460a9793ab3f571101c7cfb"} Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.370297 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt" Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.391974 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt" podStartSLOduration=2.391951648 podStartE2EDuration="2.391951648s" podCreationTimestamp="2026-01-27 08:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:11:29.3863365 +0000 UTC m=+1195.038691992" watchObservedRunningTime="2026-01-27 08:11:29.391951648 +0000 UTC m=+1195.044307140" Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.417614 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq"] Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.423258 4787 scope.go:117] "RemoveContainer" containerID="5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f" Jan 27 08:11:29 crc kubenswrapper[4787]: E0127 08:11:29.423651 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f\": container with ID starting with 5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f not found: ID does not exist" containerID="5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f" Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.423699 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f"} err="failed to get container status \"5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f\": rpc error: code = NotFound desc = could not find container \"5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f\": container with ID starting with 5cbc51ae998c8ec5e3d568f273050ca82bb17cea0dd97580f6bd144d80fe9b5f not found: ID does not 
exist" Jan 27 08:11:29 crc kubenswrapper[4787]: I0127 08:11:29.423853 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-59cc4f5964-ch4bq"] Jan 27 08:11:30 crc kubenswrapper[4787]: I0127 08:11:30.378520 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-w98b8" event={"ID":"d837af28-1f85-4995-9299-df5a288df488","Type":"ContainerStarted","Data":"d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf"} Jan 27 08:11:30 crc kubenswrapper[4787]: I0127 08:11:30.407470 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-w98b8" podStartSLOduration=2.730769323 podStartE2EDuration="3.407438157s" podCreationTimestamp="2026-01-27 08:11:27 +0000 UTC" firstStartedPulling="2026-01-27 08:11:29.013682619 +0000 UTC m=+1194.666038101" lastFinishedPulling="2026-01-27 08:11:29.690351433 +0000 UTC m=+1195.342706935" observedRunningTime="2026-01-27 08:11:30.398868585 +0000 UTC m=+1196.051224097" watchObservedRunningTime="2026-01-27 08:11:30.407438157 +0000 UTC m=+1196.059793649" Jan 27 08:11:30 crc kubenswrapper[4787]: I0127 08:11:30.499349 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-w98b8"] Jan 27 08:11:30 crc kubenswrapper[4787]: I0127 08:11:30.908977 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-2jn9v"] Jan 27 08:11:30 crc kubenswrapper[4787]: E0127 08:11:30.909482 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad17a28-2cb1-4455-8d38-58919a287ae6" containerName="manager" Jan 27 08:11:30 crc kubenswrapper[4787]: I0127 08:11:30.909506 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad17a28-2cb1-4455-8d38-58919a287ae6" containerName="manager" Jan 27 08:11:30 crc kubenswrapper[4787]: E0127 08:11:30.909597 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e4bb51-0d54-49f8-b158-8338d845d92e" containerName="operator" Jan 27 08:11:30 crc kubenswrapper[4787]: I0127 08:11:30.909606 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e4bb51-0d54-49f8-b158-8338d845d92e" containerName="operator" Jan 27 08:11:30 crc kubenswrapper[4787]: I0127 08:11:30.909814 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e4bb51-0d54-49f8-b158-8338d845d92e" containerName="operator" Jan 27 08:11:30 crc kubenswrapper[4787]: I0127 08:11:30.909832 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad17a28-2cb1-4455-8d38-58919a287ae6" containerName="manager" Jan 27 08:11:30 crc kubenswrapper[4787]: I0127 08:11:30.910748 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-2jn9v" Jan 27 08:11:30 crc kubenswrapper[4787]: I0127 08:11:30.926335 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-2jn9v"] Jan 27 08:11:31 crc kubenswrapper[4787]: I0127 08:11:31.089153 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86e4bb51-0d54-49f8-b158-8338d845d92e" path="/var/lib/kubelet/pods/86e4bb51-0d54-49f8-b158-8338d845d92e/volumes" Jan 27 08:11:31 crc kubenswrapper[4787]: I0127 08:11:31.094424 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgvxp\" (UniqueName: \"kubernetes.io/projected/804d50f1-7da5-4d0c-8c0e-cac4c5e5a244-kube-api-access-tgvxp\") pod \"nova-operator-index-2jn9v\" (UID: \"804d50f1-7da5-4d0c-8c0e-cac4c5e5a244\") " pod="openstack-operators/nova-operator-index-2jn9v" Jan 27 08:11:31 crc kubenswrapper[4787]: I0127 08:11:31.196124 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgvxp\" (UniqueName: \"kubernetes.io/projected/804d50f1-7da5-4d0c-8c0e-cac4c5e5a244-kube-api-access-tgvxp\") pod \"nova-operator-index-2jn9v\" (UID: \"804d50f1-7da5-4d0c-8c0e-cac4c5e5a244\") " pod="openstack-operators/nova-operator-index-2jn9v" Jan 27 08:11:31 crc kubenswrapper[4787]: I0127 08:11:31.232763 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgvxp\" (UniqueName: \"kubernetes.io/projected/804d50f1-7da5-4d0c-8c0e-cac4c5e5a244-kube-api-access-tgvxp\") pod \"nova-operator-index-2jn9v\" (UID: \"804d50f1-7da5-4d0c-8c0e-cac4c5e5a244\") " pod="openstack-operators/nova-operator-index-2jn9v" Jan 27 08:11:31 crc kubenswrapper[4787]: I0127 08:11:31.303039 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-2jn9v" Jan 27 08:11:31 crc kubenswrapper[4787]: I0127 08:11:31.801360 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-2jn9v"] Jan 27 08:11:32 crc kubenswrapper[4787]: I0127 08:11:32.401829 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-2jn9v" event={"ID":"804d50f1-7da5-4d0c-8c0e-cac4c5e5a244","Type":"ContainerStarted","Data":"fe27269b0a3babaa627cfac432133034ec795f2df0a87c61ed76f07a1949f31d"} Jan 27 08:11:32 crc kubenswrapper[4787]: I0127 08:11:32.402303 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-2jn9v" event={"ID":"804d50f1-7da5-4d0c-8c0e-cac4c5e5a244","Type":"ContainerStarted","Data":"a1549453f2fb0c0766b27c9809d64ccb6d4fa4219b47e921cc333e5c584bad78"} Jan 27 08:11:32 crc kubenswrapper[4787]: I0127 08:11:32.401951 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-index-w98b8" podUID="d837af28-1f85-4995-9299-df5a288df488" containerName="registry-server" containerID="cri-o://d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf" gracePeriod=2 Jan 27 08:11:32 crc kubenswrapper[4787]: I0127 08:11:32.431379 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-2jn9v" podStartSLOduration=2.377057616 podStartE2EDuration="2.431358239s" podCreationTimestamp="2026-01-27 08:11:30 +0000 UTC" firstStartedPulling="2026-01-27 08:11:31.816523473 +0000 UTC m=+1197.468878965" lastFinishedPulling="2026-01-27 08:11:31.870824096 +0000 UTC m=+1197.523179588" observedRunningTime="2026-01-27 08:11:32.422595173 +0000 UTC m=+1198.074950685" watchObservedRunningTime="2026-01-27 08:11:32.431358239 +0000 UTC m=+1198.083713731" Jan 27 08:11:32 crc kubenswrapper[4787]: I0127 08:11:32.879189 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-w98b8" Jan 27 08:11:33 crc kubenswrapper[4787]: I0127 08:11:33.034044 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6bn4\" (UniqueName: \"kubernetes.io/projected/d837af28-1f85-4995-9299-df5a288df488-kube-api-access-j6bn4\") pod \"d837af28-1f85-4995-9299-df5a288df488\" (UID: \"d837af28-1f85-4995-9299-df5a288df488\") " Jan 27 08:11:33 crc kubenswrapper[4787]: I0127 08:11:33.040727 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d837af28-1f85-4995-9299-df5a288df488-kube-api-access-j6bn4" (OuterVolumeSpecName: "kube-api-access-j6bn4") pod "d837af28-1f85-4995-9299-df5a288df488" (UID: "d837af28-1f85-4995-9299-df5a288df488"). InnerVolumeSpecName "kube-api-access-j6bn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:33 crc kubenswrapper[4787]: I0127 08:11:33.136508 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6bn4\" (UniqueName: \"kubernetes.io/projected/d837af28-1f85-4995-9299-df5a288df488-kube-api-access-j6bn4\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:33 crc kubenswrapper[4787]: I0127 08:11:33.413475 4787 generic.go:334] "Generic (PLEG): container finished" podID="d837af28-1f85-4995-9299-df5a288df488" containerID="d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf" exitCode=0 Jan 27 08:11:33 crc kubenswrapper[4787]: I0127 08:11:33.413512 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-w98b8" Jan 27 08:11:33 crc kubenswrapper[4787]: I0127 08:11:33.413529 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-w98b8" event={"ID":"d837af28-1f85-4995-9299-df5a288df488","Type":"ContainerDied","Data":"d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf"} Jan 27 08:11:33 crc kubenswrapper[4787]: I0127 08:11:33.414098 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-w98b8" event={"ID":"d837af28-1f85-4995-9299-df5a288df488","Type":"ContainerDied","Data":"a503e0936eba5a61fa705651473098014686ae86a0e2f0c7e561df05e7f9307f"} Jan 27 08:11:33 crc kubenswrapper[4787]: I0127 08:11:33.414129 4787 scope.go:117] "RemoveContainer" containerID="d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf" Jan 27 08:11:33 crc kubenswrapper[4787]: I0127 08:11:33.442249 4787 scope.go:117] "RemoveContainer" containerID="d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf" Jan 27 08:11:33 crc kubenswrapper[4787]: E0127 08:11:33.442818 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf\": container with ID starting with d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf not found: ID does not exist" containerID="d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf" Jan 27 08:11:33 crc kubenswrapper[4787]: I0127 08:11:33.442864 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf"} err="failed to get container status \"d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf\": rpc error: code = NotFound desc = could not find container \"d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf\": container with ID starting with d691ee43acf65ce1a09de83c30383fe2384fd18e046abf9eb4dedd4552b18cbf not found: ID does not exist" Jan 27 08:11:33 crc kubenswrapper[4787]: I0127 08:11:33.456058 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-w98b8"] Jan 27 08:11:33 crc kubenswrapper[4787]: I0127 08:11:33.465726 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-index-w98b8"] Jan 27 08:11:35 crc kubenswrapper[4787]: I0127 08:11:35.090152 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d837af28-1f85-4995-9299-df5a288df488" path="/var/lib/kubelet/pods/d837af28-1f85-4995-9299-df5a288df488/volumes" Jan 27 08:11:38 crc kubenswrapper[4787]: I0127 08:11:38.247407 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-57d6b69d8b-j9nxt" Jan 27 08:11:41 crc kubenswrapper[4787]: I0127 08:11:41.303962 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/nova-operator-index-2jn9v" Jan 27 08:11:41 crc kubenswrapper[4787]: I0127 08:11:41.304024 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-index-2jn9v" Jan 27 08:11:41 crc kubenswrapper[4787]: I0127 08:11:41.343029 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/nova-operator-index-2jn9v" Jan 27 08:11:41 crc kubenswrapper[4787]: I0127 08:11:41.504399 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-index-2jn9v" Jan 27 08:11:52 crc kubenswrapper[4787]: I0127 08:11:52.823602 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:11:52 crc kubenswrapper[4787]: I0127 08:11:52.824407 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:11:54 crc kubenswrapper[4787]: I0127 08:11:54.952999 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2"] Jan 27 08:11:54 crc kubenswrapper[4787]: E0127 08:11:54.953815 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d837af28-1f85-4995-9299-df5a288df488" containerName="registry-server" Jan 27 08:11:54 crc kubenswrapper[4787]: I0127 08:11:54.953830 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d837af28-1f85-4995-9299-df5a288df488" containerName="registry-server" Jan 27 08:11:54 crc kubenswrapper[4787]: I0127 08:11:54.953984 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d837af28-1f85-4995-9299-df5a288df488" containerName="registry-server" Jan 27 08:11:54 crc kubenswrapper[4787]: I0127 08:11:54.955082 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:11:54 crc kubenswrapper[4787]: I0127 08:11:54.962983 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2"] Jan 27 08:11:54 crc kubenswrapper[4787]: I0127 08:11:54.965461 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7z9mk" Jan 27 08:11:55 crc kubenswrapper[4787]: I0127 08:11:55.057296 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sftgl\" (UniqueName: \"kubernetes.io/projected/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-kube-api-access-sftgl\") pod \"666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2\" (UID: \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\") " pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:11:55 crc kubenswrapper[4787]: I0127 08:11:55.057405 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-util\") pod \"666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2\" (UID: \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\") " pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:11:55 crc kubenswrapper[4787]: I0127 08:11:55.057471 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-bundle\") pod \"666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2\" (UID: \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\") " pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:11:55 crc kubenswrapper[4787]: I0127 08:11:55.159506 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-bundle\") pod \"666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2\" (UID: \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\") " pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:11:55 crc kubenswrapper[4787]: I0127 08:11:55.159930 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sftgl\" (UniqueName: \"kubernetes.io/projected/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-kube-api-access-sftgl\") pod \"666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2\" (UID: \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\") " pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:11:55 crc kubenswrapper[4787]: I0127 08:11:55.160158 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-bundle\") pod \"666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2\" (UID: \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\") " pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:11:55 crc kubenswrapper[4787]: I0127 08:11:55.160367 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-util\") pod \"666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2\" (UID: \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\") " pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:11:55 crc kubenswrapper[4787]: I0127 08:11:55.161018 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-util\") pod \"666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2\" (UID: \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\") " pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:11:55 crc kubenswrapper[4787]: I0127 08:11:55.181369 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sftgl\" (UniqueName: \"kubernetes.io/projected/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-kube-api-access-sftgl\") pod \"666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2\" (UID: \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\") " pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:11:55 crc kubenswrapper[4787]: I0127 08:11:55.278922 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:11:55 crc kubenswrapper[4787]: I0127 08:11:55.789369 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2"] Jan 27 08:11:56 crc kubenswrapper[4787]: I0127 08:11:56.601392 4787 generic.go:334] "Generic (PLEG): container finished" podID="46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" containerID="22424c7a10f080df1cdbb0e1dd7570d87671d5e19608eda8e3504e4abb4919a6" exitCode=0 Jan 27 08:11:56 crc kubenswrapper[4787]: I0127 08:11:56.601487 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" event={"ID":"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5","Type":"ContainerDied","Data":"22424c7a10f080df1cdbb0e1dd7570d87671d5e19608eda8e3504e4abb4919a6"} Jan 27 08:11:56 crc kubenswrapper[4787]: I0127 08:11:56.601715 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" event={"ID":"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5","Type":"ContainerStarted","Data":"63e398e677bb8f944eb1dfb72e783fa606c3fbf7e709c710ccf82db09ea132a2"} Jan 27 08:11:59 crc kubenswrapper[4787]: I0127 08:11:59.626446 4787 generic.go:334] "Generic (PLEG): container finished" podID="46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" containerID="d2a7b14a2aa008c1cb7abb31f77cb88e86b92e22245e9dc2eebf5af98791cf56" exitCode=0 Jan 27 08:11:59 crc kubenswrapper[4787]: I0127 08:11:59.626624 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" event={"ID":"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5","Type":"ContainerDied","Data":"d2a7b14a2aa008c1cb7abb31f77cb88e86b92e22245e9dc2eebf5af98791cf56"} Jan 27 08:12:00 crc kubenswrapper[4787]: I0127 08:12:00.644319 4787 generic.go:334] "Generic (PLEG): container finished" podID="46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" containerID="3aef8535db6c863eb9ad8cbb5b308d1f9d18b7b0e842a6871d7f37a64dfcaae1" exitCode=0 Jan 27 08:12:00 crc kubenswrapper[4787]: I0127 08:12:00.644499 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" event={"ID":"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5","Type":"ContainerDied","Data":"3aef8535db6c863eb9ad8cbb5b308d1f9d18b7b0e842a6871d7f37a64dfcaae1"} Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.030324 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.095662 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sftgl\" (UniqueName: \"kubernetes.io/projected/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-kube-api-access-sftgl\") pod \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\" (UID: \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\") " Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.095826 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-bundle\") pod \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\" (UID: \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\") " Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.095944 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-util\") pod \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\" (UID: \"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5\") " Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.097599 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-bundle" (OuterVolumeSpecName: "bundle") pod "46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" (UID: "46a8c934-ec5d-48f9-8c39-f4e4c730d7a5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.103608 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-kube-api-access-sftgl" (OuterVolumeSpecName: "kube-api-access-sftgl") pod "46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" (UID: "46a8c934-ec5d-48f9-8c39-f4e4c730d7a5"). InnerVolumeSpecName "kube-api-access-sftgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.109847 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-util" (OuterVolumeSpecName: "util") pod "46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" (UID: "46a8c934-ec5d-48f9-8c39-f4e4c730d7a5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.197681 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-util\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.198156 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sftgl\" (UniqueName: \"kubernetes.io/projected/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-kube-api-access-sftgl\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.198172 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46a8c934-ec5d-48f9-8c39-f4e4c730d7a5-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.663160 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" event={"ID":"46a8c934-ec5d-48f9-8c39-f4e4c730d7a5","Type":"ContainerDied","Data":"63e398e677bb8f944eb1dfb72e783fa606c3fbf7e709c710ccf82db09ea132a2"} Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.663214 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63e398e677bb8f944eb1dfb72e783fa606c3fbf7e709c710ccf82db09ea132a2" Jan 27 08:12:02 crc kubenswrapper[4787]: I0127 08:12:02.663626 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2" Jan 27 08:12:22 crc kubenswrapper[4787]: I0127 08:12:22.822957 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:12:22 crc kubenswrapper[4787]: I0127 08:12:22.823876 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:12:52 crc kubenswrapper[4787]: I0127 08:12:52.822686 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:12:52 crc kubenswrapper[4787]: I0127 08:12:52.823585 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:12:52 crc kubenswrapper[4787]: I0127 08:12:52.823651 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 08:12:52 crc kubenswrapper[4787]: I0127 08:12:52.824537 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"62fa4f58004172098a708acf93c4ba2c2d75c5b2ad098327437e26bfa28ab448"} pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:12:52 crc kubenswrapper[4787]: I0127 08:12:52.824632 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" containerID="cri-o://62fa4f58004172098a708acf93c4ba2c2d75c5b2ad098327437e26bfa28ab448" gracePeriod=600 Jan 27 08:12:53 crc kubenswrapper[4787]: I0127 08:12:53.075963 4787 generic.go:334] "Generic (PLEG): container finished" podID="f051e184-acac-47cf-9e04-9df648288715" containerID="62fa4f58004172098a708acf93c4ba2c2d75c5b2ad098327437e26bfa28ab448" exitCode=0 Jan 27 08:12:53 crc kubenswrapper[4787]: I0127 08:12:53.085237 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerDied","Data":"62fa4f58004172098a708acf93c4ba2c2d75c5b2ad098327437e26bfa28ab448"} Jan 27 08:12:53 crc kubenswrapper[4787]: I0127 08:12:53.085285 4787 scope.go:117] "RemoveContainer" containerID="8407f8fd138ff9e300a7e8488e02cc30a66d50ed89a7c4ef1d3339c94e2a22b5" Jan 27 08:12:54 crc kubenswrapper[4787]: I0127 08:12:54.087236 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerStarted","Data":"66c62556ef822841ab93c99b23fda5b0b3ea129b94ebcca6f4cff0b2d8c7f32e"} Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.535303 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sdc6l"] Jan 27 08:14:25 crc kubenswrapper[4787]: E0127 08:14:25.536404 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" containerName="util" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.536418 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" containerName="util" Jan 27 08:14:25 crc kubenswrapper[4787]: E0127 08:14:25.536445 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" containerName="pull" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.536451 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" containerName="pull" Jan 27 08:14:25 crc kubenswrapper[4787]: E0127 08:14:25.536475 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" containerName="extract" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.536481 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" containerName="extract" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.536657 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a8c934-ec5d-48f9-8c39-f4e4c730d7a5" containerName="extract" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.537789 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.549114 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdc6l"] Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.677993 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqjn\" (UniqueName: \"kubernetes.io/projected/8218445e-5e55-47de-8494-09626ae390d7-kube-api-access-njqjn\") pod \"redhat-operators-sdc6l\" (UID: \"8218445e-5e55-47de-8494-09626ae390d7\") " pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.678080 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8218445e-5e55-47de-8494-09626ae390d7-catalog-content\") pod \"redhat-operators-sdc6l\" (UID: \"8218445e-5e55-47de-8494-09626ae390d7\") " pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.678111 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8218445e-5e55-47de-8494-09626ae390d7-utilities\") pod \"redhat-operators-sdc6l\" (UID: \"8218445e-5e55-47de-8494-09626ae390d7\") " pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.779333 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8218445e-5e55-47de-8494-09626ae390d7-catalog-content\") pod \"redhat-operators-sdc6l\" (UID: \"8218445e-5e55-47de-8494-09626ae390d7\") " pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.779380 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8218445e-5e55-47de-8494-09626ae390d7-utilities\") pod \"redhat-operators-sdc6l\" (UID: \"8218445e-5e55-47de-8494-09626ae390d7\") " pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.779496 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njqjn\" (UniqueName: \"kubernetes.io/projected/8218445e-5e55-47de-8494-09626ae390d7-kube-api-access-njqjn\") pod \"redhat-operators-sdc6l\" (UID: \"8218445e-5e55-47de-8494-09626ae390d7\") " pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.779945 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8218445e-5e55-47de-8494-09626ae390d7-catalog-content\") pod \"redhat-operators-sdc6l\" (UID: \"8218445e-5e55-47de-8494-09626ae390d7\") " pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.780208 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8218445e-5e55-47de-8494-09626ae390d7-utilities\") pod \"redhat-operators-sdc6l\" (UID: \"8218445e-5e55-47de-8494-09626ae390d7\") " pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.817803 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-njqjn\" (UniqueName: \"kubernetes.io/projected/8218445e-5e55-47de-8494-09626ae390d7-kube-api-access-njqjn\") pod \"redhat-operators-sdc6l\" (UID: \"8218445e-5e55-47de-8494-09626ae390d7\") " pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:25 crc kubenswrapper[4787]: I0127 08:14:25.857124 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:26 crc kubenswrapper[4787]: I0127 08:14:26.380496 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdc6l"] Jan 27 08:14:26 crc kubenswrapper[4787]: I0127 08:14:26.765534 4787 generic.go:334] "Generic (PLEG): container finished" podID="8218445e-5e55-47de-8494-09626ae390d7" containerID="280a6824ecd02da11881b8f213c17337d293087aac7116df04d3b178db903adc" exitCode=0 Jan 27 08:14:26 crc kubenswrapper[4787]: I0127 08:14:26.765609 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdc6l" event={"ID":"8218445e-5e55-47de-8494-09626ae390d7","Type":"ContainerDied","Data":"280a6824ecd02da11881b8f213c17337d293087aac7116df04d3b178db903adc"} Jan 27 08:14:26 crc kubenswrapper[4787]: I0127 08:14:26.765676 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdc6l" event={"ID":"8218445e-5e55-47de-8494-09626ae390d7","Type":"ContainerStarted","Data":"3f35b98bcba954c309b775902bd7e82ed8b70ba8f32820c61d2416b6243a6a93"} Jan 27 08:14:26 crc kubenswrapper[4787]: I0127 08:14:26.768141 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:14:27 crc kubenswrapper[4787]: I0127 08:14:27.776989 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdc6l" event={"ID":"8218445e-5e55-47de-8494-09626ae390d7","Type":"ContainerStarted","Data":"e6bb138cca2f4e6fa156e54e99282b58c30f53856dc00f9a5d3f8a6670030034"} Jan 27 08:14:28 crc kubenswrapper[4787]: I0127 08:14:28.787014 4787 generic.go:334] "Generic (PLEG): container finished" podID="8218445e-5e55-47de-8494-09626ae390d7" containerID="e6bb138cca2f4e6fa156e54e99282b58c30f53856dc00f9a5d3f8a6670030034" exitCode=0 Jan 27 08:14:28 crc kubenswrapper[4787]: I0127 08:14:28.787141 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdc6l" event={"ID":"8218445e-5e55-47de-8494-09626ae390d7","Type":"ContainerDied","Data":"e6bb138cca2f4e6fa156e54e99282b58c30f53856dc00f9a5d3f8a6670030034"} Jan 27 08:14:29 crc kubenswrapper[4787]: I0127 08:14:29.797079 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdc6l" event={"ID":"8218445e-5e55-47de-8494-09626ae390d7","Type":"ContainerStarted","Data":"dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96"} Jan 27 08:14:29 crc kubenswrapper[4787]: I0127 08:14:29.819887 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sdc6l" podStartSLOduration=2.364462915 podStartE2EDuration="4.819868013s" podCreationTimestamp="2026-01-27 08:14:25 +0000 UTC" firstStartedPulling="2026-01-27 08:14:26.767902764 +0000 UTC m=+1372.420258256" lastFinishedPulling="2026-01-27 08:14:29.223307862 +0000 UTC m=+1374.875663354" observedRunningTime="2026-01-27 08:14:29.813717446 +0000 UTC m=+1375.466072978" watchObservedRunningTime="2026-01-27 08:14:29.819868013 +0000 UTC m=+1375.472223515" Jan 27 08:14:35 crc 
kubenswrapper[4787]: I0127 08:14:35.858444 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:35 crc kubenswrapper[4787]: I0127 08:14:35.858903 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:35 crc kubenswrapper[4787]: I0127 08:14:35.898477 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:36 crc kubenswrapper[4787]: I0127 08:14:36.932534 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:36 crc kubenswrapper[4787]: I0127 08:14:36.982262 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdc6l"] Jan 27 08:14:38 crc kubenswrapper[4787]: I0127 08:14:38.874497 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sdc6l" podUID="8218445e-5e55-47de-8494-09626ae390d7" containerName="registry-server" containerID="cri-o://dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96" gracePeriod=2 Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.337225 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.419924 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8218445e-5e55-47de-8494-09626ae390d7-catalog-content\") pod \"8218445e-5e55-47de-8494-09626ae390d7\" (UID: \"8218445e-5e55-47de-8494-09626ae390d7\") " Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.420285 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njqjn\" (UniqueName: \"kubernetes.io/projected/8218445e-5e55-47de-8494-09626ae390d7-kube-api-access-njqjn\") pod \"8218445e-5e55-47de-8494-09626ae390d7\" (UID: \"8218445e-5e55-47de-8494-09626ae390d7\") " Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.420391 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8218445e-5e55-47de-8494-09626ae390d7-utilities\") pod \"8218445e-5e55-47de-8494-09626ae390d7\" (UID: \"8218445e-5e55-47de-8494-09626ae390d7\") " Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.421627 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8218445e-5e55-47de-8494-09626ae390d7-utilities" (OuterVolumeSpecName: "utilities") pod "8218445e-5e55-47de-8494-09626ae390d7" (UID: "8218445e-5e55-47de-8494-09626ae390d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.426534 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8218445e-5e55-47de-8494-09626ae390d7-kube-api-access-njqjn" (OuterVolumeSpecName: "kube-api-access-njqjn") pod "8218445e-5e55-47de-8494-09626ae390d7" (UID: "8218445e-5e55-47de-8494-09626ae390d7"). InnerVolumeSpecName "kube-api-access-njqjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.522245 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njqjn\" (UniqueName: \"kubernetes.io/projected/8218445e-5e55-47de-8494-09626ae390d7-kube-api-access-njqjn\") on node \"crc\" DevicePath \"\"" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.522611 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8218445e-5e55-47de-8494-09626ae390d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.546257 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8218445e-5e55-47de-8494-09626ae390d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8218445e-5e55-47de-8494-09626ae390d7" (UID: "8218445e-5e55-47de-8494-09626ae390d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.624241 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8218445e-5e55-47de-8494-09626ae390d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.883394 4787 generic.go:334] "Generic (PLEG): container finished" podID="8218445e-5e55-47de-8494-09626ae390d7" containerID="dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96" exitCode=0 Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.883466 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdc6l" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.883477 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdc6l" event={"ID":"8218445e-5e55-47de-8494-09626ae390d7","Type":"ContainerDied","Data":"dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96"} Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.883511 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdc6l" event={"ID":"8218445e-5e55-47de-8494-09626ae390d7","Type":"ContainerDied","Data":"3f35b98bcba954c309b775902bd7e82ed8b70ba8f32820c61d2416b6243a6a93"} Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.883544 4787 scope.go:117] "RemoveContainer" containerID="dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.912595 4787 scope.go:117] "RemoveContainer" containerID="e6bb138cca2f4e6fa156e54e99282b58c30f53856dc00f9a5d3f8a6670030034" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.929784 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdc6l"] Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.938850 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sdc6l"] Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.941970 4787 scope.go:117] "RemoveContainer" containerID="280a6824ecd02da11881b8f213c17337d293087aac7116df04d3b178db903adc" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.957218 4787 scope.go:117] "RemoveContainer" containerID="dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96" Jan 27 08:14:39 crc kubenswrapper[4787]: E0127 08:14:39.957687 4787 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96\": container with ID starting with dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96 not found: ID does not exist" containerID="dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.957754 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96"} err="failed to get container status \"dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96\": rpc error: code = NotFound desc = could not find container \"dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96\": container with ID starting with dae4ac2676ec238e108d5af9769af1e39361238e7a77c3ff359cb911aa7cfe96 not found: ID does not exist" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.957787 4787 scope.go:117] "RemoveContainer" containerID="e6bb138cca2f4e6fa156e54e99282b58c30f53856dc00f9a5d3f8a6670030034" Jan 27 08:14:39 crc kubenswrapper[4787]: E0127 08:14:39.958224 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6bb138cca2f4e6fa156e54e99282b58c30f53856dc00f9a5d3f8a6670030034\": container with ID starting with e6bb138cca2f4e6fa156e54e99282b58c30f53856dc00f9a5d3f8a6670030034 not found: ID does not exist" containerID="e6bb138cca2f4e6fa156e54e99282b58c30f53856dc00f9a5d3f8a6670030034" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.958356 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6bb138cca2f4e6fa156e54e99282b58c30f53856dc00f9a5d3f8a6670030034"} err="failed to get container status \"e6bb138cca2f4e6fa156e54e99282b58c30f53856dc00f9a5d3f8a6670030034\": rpc error: code = NotFound desc = could not find container \"e6bb138cca2f4e6fa156e54e99282b58c30f53856dc00f9a5d3f8a6670030034\": container with ID starting with e6bb138cca2f4e6fa156e54e99282b58c30f53856dc00f9a5d3f8a6670030034 not found: ID does not exist" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.958476 4787 scope.go:117] "RemoveContainer" containerID="280a6824ecd02da11881b8f213c17337d293087aac7116df04d3b178db903adc" Jan 27 08:14:39 crc kubenswrapper[4787]: E0127 08:14:39.958885 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"280a6824ecd02da11881b8f213c17337d293087aac7116df04d3b178db903adc\": container with ID starting with 280a6824ecd02da11881b8f213c17337d293087aac7116df04d3b178db903adc not found: ID does not exist" containerID="280a6824ecd02da11881b8f213c17337d293087aac7116df04d3b178db903adc" Jan 27 08:14:39 crc kubenswrapper[4787]: I0127 08:14:39.958915 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280a6824ecd02da11881b8f213c17337d293087aac7116df04d3b178db903adc"} err="failed to get container status \"280a6824ecd02da11881b8f213c17337d293087aac7116df04d3b178db903adc\": rpc error: code = NotFound desc = could not find container \"280a6824ecd02da11881b8f213c17337d293087aac7116df04d3b178db903adc\": container with ID starting with 280a6824ecd02da11881b8f213c17337d293087aac7116df04d3b178db903adc not found: ID does not exist" Jan 27 08:14:41 crc kubenswrapper[4787]: I0127 08:14:41.086797 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="8218445e-5e55-47de-8494-09626ae390d7" path="/var/lib/kubelet/pods/8218445e-5e55-47de-8494-09626ae390d7/volumes" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.488657 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4g7ns"] Jan 27 08:14:48 crc kubenswrapper[4787]: E0127 08:14:48.489973 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8218445e-5e55-47de-8494-09626ae390d7" containerName="registry-server" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.489993 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8218445e-5e55-47de-8494-09626ae390d7" containerName="registry-server" Jan 27 08:14:48 crc kubenswrapper[4787]: E0127 08:14:48.490031 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8218445e-5e55-47de-8494-09626ae390d7" containerName="extract-utilities" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.490039 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8218445e-5e55-47de-8494-09626ae390d7" containerName="extract-utilities" Jan 27 08:14:48 crc kubenswrapper[4787]: E0127 08:14:48.490060 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8218445e-5e55-47de-8494-09626ae390d7" containerName="extract-content" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.490067 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8218445e-5e55-47de-8494-09626ae390d7" containerName="extract-content" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.490262 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8218445e-5e55-47de-8494-09626ae390d7" containerName="registry-server" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.522722 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.540113 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g7ns"] Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.580885 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm8ks\" (UniqueName: \"kubernetes.io/projected/28c90b47-9ec6-477a-a770-60904a0a4236-kube-api-access-vm8ks\") pod \"redhat-marketplace-4g7ns\" (UID: \"28c90b47-9ec6-477a-a770-60904a0a4236\") " pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.581004 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c90b47-9ec6-477a-a770-60904a0a4236-utilities\") pod \"redhat-marketplace-4g7ns\" (UID: \"28c90b47-9ec6-477a-a770-60904a0a4236\") " pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.581068 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c90b47-9ec6-477a-a770-60904a0a4236-catalog-content\") pod \"redhat-marketplace-4g7ns\" (UID: \"28c90b47-9ec6-477a-a770-60904a0a4236\") " pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.682251 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm8ks\" (UniqueName: \"kubernetes.io/projected/28c90b47-9ec6-477a-a770-60904a0a4236-kube-api-access-vm8ks\") pod \"redhat-marketplace-4g7ns\" (UID: \"28c90b47-9ec6-477a-a770-60904a0a4236\") " pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.682321 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c90b47-9ec6-477a-a770-60904a0a4236-utilities\") pod \"redhat-marketplace-4g7ns\" (UID: \"28c90b47-9ec6-477a-a770-60904a0a4236\") " pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.682360 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c90b47-9ec6-477a-a770-60904a0a4236-catalog-content\") pod \"redhat-marketplace-4g7ns\" (UID: \"28c90b47-9ec6-477a-a770-60904a0a4236\") " pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.682972 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c90b47-9ec6-477a-a770-60904a0a4236-utilities\") pod \"redhat-marketplace-4g7ns\" (UID: \"28c90b47-9ec6-477a-a770-60904a0a4236\") " pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.683230 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c90b47-9ec6-477a-a770-60904a0a4236-catalog-content\") pod \"redhat-marketplace-4g7ns\" (UID: \"28c90b47-9ec6-477a-a770-60904a0a4236\") " pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.704293 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vm8ks\" (UniqueName: \"kubernetes.io/projected/28c90b47-9ec6-477a-a770-60904a0a4236-kube-api-access-vm8ks\") pod \"redhat-marketplace-4g7ns\" (UID: \"28c90b47-9ec6-477a-a770-60904a0a4236\") " pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:48 crc kubenswrapper[4787]: I0127 08:14:48.849201 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:49 crc kubenswrapper[4787]: I0127 08:14:49.314470 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g7ns"] Jan 27 08:14:49 crc kubenswrapper[4787]: I0127 08:14:49.968109 4787 generic.go:334] "Generic (PLEG): container finished" podID="28c90b47-9ec6-477a-a770-60904a0a4236" containerID="8154237b3ba079aff74e2c4c4e45d7909ce1ee4e70679a73d5b8e5f1e23426f1" exitCode=0 Jan 27 08:14:49 crc kubenswrapper[4787]: I0127 08:14:49.968188 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g7ns" event={"ID":"28c90b47-9ec6-477a-a770-60904a0a4236","Type":"ContainerDied","Data":"8154237b3ba079aff74e2c4c4e45d7909ce1ee4e70679a73d5b8e5f1e23426f1"} Jan 27 08:14:49 crc kubenswrapper[4787]: I0127 08:14:49.968632 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g7ns" event={"ID":"28c90b47-9ec6-477a-a770-60904a0a4236","Type":"ContainerStarted","Data":"24234f9674cf861fa7b19408034aa2877fd4e5aaceb6d880efe0e608f15b8ca1"} Jan 27 08:14:50 crc kubenswrapper[4787]: I0127 08:14:50.978466 4787 generic.go:334] "Generic (PLEG): container finished" podID="28c90b47-9ec6-477a-a770-60904a0a4236" containerID="6ca8c474a5fe3de819fd795d3b2739afafc446c1cdd4220442f048ba2291eaf2" exitCode=0 Jan 27 08:14:50 crc kubenswrapper[4787]: I0127 08:14:50.978512 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g7ns" event={"ID":"28c90b47-9ec6-477a-a770-60904a0a4236","Type":"ContainerDied","Data":"6ca8c474a5fe3de819fd795d3b2739afafc446c1cdd4220442f048ba2291eaf2"} Jan 27 08:14:51 crc kubenswrapper[4787]: I0127 08:14:51.989734 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g7ns" event={"ID":"28c90b47-9ec6-477a-a770-60904a0a4236","Type":"ContainerStarted","Data":"3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085"} Jan 27 08:14:52 crc kubenswrapper[4787]: I0127 08:14:52.022755 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4g7ns" podStartSLOduration=2.5308242439999997 podStartE2EDuration="4.022725293s" podCreationTimestamp="2026-01-27 08:14:48 +0000 UTC" firstStartedPulling="2026-01-27 08:14:49.969949666 +0000 UTC m=+1395.622305158" lastFinishedPulling="2026-01-27 08:14:51.461850715 +0000 UTC m=+1397.114206207" observedRunningTime="2026-01-27 08:14:52.011537575 +0000 UTC m=+1397.663893087" watchObservedRunningTime="2026-01-27 08:14:52.022725293 +0000 UTC m=+1397.675080805" Jan 27 08:14:58 crc kubenswrapper[4787]: I0127 08:14:58.850109 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:58 crc kubenswrapper[4787]: I0127 08:14:58.850889 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:58 crc kubenswrapper[4787]: I0127 08:14:58.901422 4787 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:59 crc kubenswrapper[4787]: I0127 08:14:59.085485 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:14:59 crc kubenswrapper[4787]: I0127 08:14:59.132943 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g7ns"] Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.148280 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr"] Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.149373 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.151851 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.152680 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.165157 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr"] Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.281512 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca43b74d-287b-422f-b051-db48f00b98ba-secret-volume\") pod \"collect-profiles-29491695-blvcr\" (UID: \"ca43b74d-287b-422f-b051-db48f00b98ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.281802 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca43b74d-287b-422f-b051-db48f00b98ba-config-volume\") pod \"collect-profiles-29491695-blvcr\" (UID: \"ca43b74d-287b-422f-b051-db48f00b98ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.281865 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44gkw\" (UniqueName: \"kubernetes.io/projected/ca43b74d-287b-422f-b051-db48f00b98ba-kube-api-access-44gkw\") pod \"collect-profiles-29491695-blvcr\" (UID: \"ca43b74d-287b-422f-b051-db48f00b98ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.383284 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca43b74d-287b-422f-b051-db48f00b98ba-secret-volume\") pod \"collect-profiles-29491695-blvcr\" (UID: \"ca43b74d-287b-422f-b051-db48f00b98ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.383792 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca43b74d-287b-422f-b051-db48f00b98ba-config-volume\") pod \"collect-profiles-29491695-blvcr\" (UID: 
\"ca43b74d-287b-422f-b051-db48f00b98ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.383818 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44gkw\" (UniqueName: \"kubernetes.io/projected/ca43b74d-287b-422f-b051-db48f00b98ba-kube-api-access-44gkw\") pod \"collect-profiles-29491695-blvcr\" (UID: \"ca43b74d-287b-422f-b051-db48f00b98ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.384695 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca43b74d-287b-422f-b051-db48f00b98ba-config-volume\") pod \"collect-profiles-29491695-blvcr\" (UID: \"ca43b74d-287b-422f-b051-db48f00b98ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.389226 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca43b74d-287b-422f-b051-db48f00b98ba-secret-volume\") pod \"collect-profiles-29491695-blvcr\" (UID: \"ca43b74d-287b-422f-b051-db48f00b98ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.407282 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44gkw\" (UniqueName: \"kubernetes.io/projected/ca43b74d-287b-422f-b051-db48f00b98ba-kube-api-access-44gkw\") pod \"collect-profiles-29491695-blvcr\" (UID: \"ca43b74d-287b-422f-b051-db48f00b98ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.478983 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:00 crc kubenswrapper[4787]: I0127 08:15:00.948900 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr"] Jan 27 08:15:01 crc kubenswrapper[4787]: I0127 08:15:01.052930 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" event={"ID":"ca43b74d-287b-422f-b051-db48f00b98ba","Type":"ContainerStarted","Data":"68d809d5825cca82ed36cccdd71cd7b017f6ddf63d8759382338a0b3c2b08ffe"} Jan 27 08:15:01 crc kubenswrapper[4787]: I0127 08:15:01.053106 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4g7ns" podUID="28c90b47-9ec6-477a-a770-60904a0a4236" containerName="registry-server" containerID="cri-o://3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085" gracePeriod=2 Jan 27 08:15:01 crc kubenswrapper[4787]: I0127 08:15:01.512441 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:15:01 crc kubenswrapper[4787]: I0127 08:15:01.607236 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm8ks\" (UniqueName: \"kubernetes.io/projected/28c90b47-9ec6-477a-a770-60904a0a4236-kube-api-access-vm8ks\") pod \"28c90b47-9ec6-477a-a770-60904a0a4236\" (UID: \"28c90b47-9ec6-477a-a770-60904a0a4236\") " Jan 27 08:15:01 crc kubenswrapper[4787]: I0127 08:15:01.607331 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c90b47-9ec6-477a-a770-60904a0a4236-utilities\") pod \"28c90b47-9ec6-477a-a770-60904a0a4236\" (UID: \"28c90b47-9ec6-477a-a770-60904a0a4236\") " Jan 27 08:15:01 crc kubenswrapper[4787]: I0127 08:15:01.607372 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c90b47-9ec6-477a-a770-60904a0a4236-catalog-content\") pod \"28c90b47-9ec6-477a-a770-60904a0a4236\" (UID: \"28c90b47-9ec6-477a-a770-60904a0a4236\") " Jan 27 08:15:01 crc kubenswrapper[4787]: I0127 08:15:01.613891 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c90b47-9ec6-477a-a770-60904a0a4236-utilities" (OuterVolumeSpecName: "utilities") pod "28c90b47-9ec6-477a-a770-60904a0a4236" (UID: "28c90b47-9ec6-477a-a770-60904a0a4236"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:15:01 crc kubenswrapper[4787]: I0127 08:15:01.640649 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c90b47-9ec6-477a-a770-60904a0a4236-kube-api-access-vm8ks" (OuterVolumeSpecName: "kube-api-access-vm8ks") pod "28c90b47-9ec6-477a-a770-60904a0a4236" (UID: "28c90b47-9ec6-477a-a770-60904a0a4236"). InnerVolumeSpecName "kube-api-access-vm8ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:15:01 crc kubenswrapper[4787]: I0127 08:15:01.709095 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm8ks\" (UniqueName: \"kubernetes.io/projected/28c90b47-9ec6-477a-a770-60904a0a4236-kube-api-access-vm8ks\") on node \"crc\" DevicePath \"\"" Jan 27 08:15:01 crc kubenswrapper[4787]: I0127 08:15:01.709331 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c90b47-9ec6-477a-a770-60904a0a4236-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:15:01 crc kubenswrapper[4787]: I0127 08:15:01.748307 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c90b47-9ec6-477a-a770-60904a0a4236-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28c90b47-9ec6-477a-a770-60904a0a4236" (UID: "28c90b47-9ec6-477a-a770-60904a0a4236"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:15:01 crc kubenswrapper[4787]: I0127 08:15:01.811262 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c90b47-9ec6-477a-a770-60904a0a4236-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.062117 4787 generic.go:334] "Generic (PLEG): container finished" podID="ca43b74d-287b-422f-b051-db48f00b98ba" containerID="ebf7c142fe0d519190036ee75ceaaf9a044d1a3b84003048844748f7dc5ffc20" exitCode=0 Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.062233 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" event={"ID":"ca43b74d-287b-422f-b051-db48f00b98ba","Type":"ContainerDied","Data":"ebf7c142fe0d519190036ee75ceaaf9a044d1a3b84003048844748f7dc5ffc20"} Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.064705 4787 generic.go:334] "Generic (PLEG): container finished" podID="28c90b47-9ec6-477a-a770-60904a0a4236" containerID="3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085" exitCode=0 Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.064753 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g7ns" event={"ID":"28c90b47-9ec6-477a-a770-60904a0a4236","Type":"ContainerDied","Data":"3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085"} Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.064781 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g7ns" event={"ID":"28c90b47-9ec6-477a-a770-60904a0a4236","Type":"ContainerDied","Data":"24234f9674cf861fa7b19408034aa2877fd4e5aaceb6d880efe0e608f15b8ca1"} Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.064790 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g7ns" Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.064865 4787 scope.go:117] "RemoveContainer" containerID="3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085" Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.091337 4787 scope.go:117] "RemoveContainer" containerID="6ca8c474a5fe3de819fd795d3b2739afafc446c1cdd4220442f048ba2291eaf2" Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.107488 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g7ns"] Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.110646 4787 scope.go:117] "RemoveContainer" containerID="8154237b3ba079aff74e2c4c4e45d7909ce1ee4e70679a73d5b8e5f1e23426f1" Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.126834 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g7ns"] Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.139477 4787 scope.go:117] "RemoveContainer" containerID="3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085" Jan 27 08:15:02 crc kubenswrapper[4787]: E0127 08:15:02.139936 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085\": container with ID starting with 3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085 not found: ID does not exist" containerID="3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085" Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.139983 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085"} err="failed to get container status \"3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085\": rpc error: code = NotFound desc = could not find container \"3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085\": container with ID starting with 3884bb8a07defe13492ec93fb1797ad29d963eecac9b644e081998844bc86085 not found: ID does not exist" Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.140008 4787 scope.go:117] "RemoveContainer" containerID="6ca8c474a5fe3de819fd795d3b2739afafc446c1cdd4220442f048ba2291eaf2" Jan 27 08:15:02 crc kubenswrapper[4787]: E0127 08:15:02.140586 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ca8c474a5fe3de819fd795d3b2739afafc446c1cdd4220442f048ba2291eaf2\": container with ID starting with 6ca8c474a5fe3de819fd795d3b2739afafc446c1cdd4220442f048ba2291eaf2 not found: ID does not exist" containerID="6ca8c474a5fe3de819fd795d3b2739afafc446c1cdd4220442f048ba2291eaf2" Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.140638 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ca8c474a5fe3de819fd795d3b2739afafc446c1cdd4220442f048ba2291eaf2"} err="failed to get container status \"6ca8c474a5fe3de819fd795d3b2739afafc446c1cdd4220442f048ba2291eaf2\": rpc error: code = NotFound desc = could not find container \"6ca8c474a5fe3de819fd795d3b2739afafc446c1cdd4220442f048ba2291eaf2\": container with ID starting with 6ca8c474a5fe3de819fd795d3b2739afafc446c1cdd4220442f048ba2291eaf2 not found: ID does not exist" Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.140674 4787 scope.go:117] "RemoveContainer" 
containerID="8154237b3ba079aff74e2c4c4e45d7909ce1ee4e70679a73d5b8e5f1e23426f1" Jan 27 08:15:02 crc kubenswrapper[4787]: E0127 08:15:02.141162 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8154237b3ba079aff74e2c4c4e45d7909ce1ee4e70679a73d5b8e5f1e23426f1\": container with ID starting with 8154237b3ba079aff74e2c4c4e45d7909ce1ee4e70679a73d5b8e5f1e23426f1 not found: ID does not exist" containerID="8154237b3ba079aff74e2c4c4e45d7909ce1ee4e70679a73d5b8e5f1e23426f1" Jan 27 08:15:02 crc kubenswrapper[4787]: I0127 08:15:02.141196 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8154237b3ba079aff74e2c4c4e45d7909ce1ee4e70679a73d5b8e5f1e23426f1"} err="failed to get container status \"8154237b3ba079aff74e2c4c4e45d7909ce1ee4e70679a73d5b8e5f1e23426f1\": rpc error: code = NotFound desc = could not find container \"8154237b3ba079aff74e2c4c4e45d7909ce1ee4e70679a73d5b8e5f1e23426f1\": container with ID starting with 8154237b3ba079aff74e2c4c4e45d7909ce1ee4e70679a73d5b8e5f1e23426f1 not found: ID does not exist" Jan 27 08:15:03 crc kubenswrapper[4787]: I0127 08:15:03.085903 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c90b47-9ec6-477a-a770-60904a0a4236" path="/var/lib/kubelet/pods/28c90b47-9ec6-477a-a770-60904a0a4236/volumes" Jan 27 08:15:03 crc kubenswrapper[4787]: I0127 08:15:03.408144 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:03 crc kubenswrapper[4787]: I0127 08:15:03.542905 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca43b74d-287b-422f-b051-db48f00b98ba-secret-volume\") pod \"ca43b74d-287b-422f-b051-db48f00b98ba\" (UID: \"ca43b74d-287b-422f-b051-db48f00b98ba\") " Jan 27 08:15:03 crc kubenswrapper[4787]: I0127 08:15:03.543076 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca43b74d-287b-422f-b051-db48f00b98ba-config-volume\") pod \"ca43b74d-287b-422f-b051-db48f00b98ba\" (UID: \"ca43b74d-287b-422f-b051-db48f00b98ba\") " Jan 27 08:15:03 crc kubenswrapper[4787]: I0127 08:15:03.543116 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44gkw\" (UniqueName: \"kubernetes.io/projected/ca43b74d-287b-422f-b051-db48f00b98ba-kube-api-access-44gkw\") pod \"ca43b74d-287b-422f-b051-db48f00b98ba\" (UID: \"ca43b74d-287b-422f-b051-db48f00b98ba\") " Jan 27 08:15:03 crc kubenswrapper[4787]: I0127 08:15:03.544489 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca43b74d-287b-422f-b051-db48f00b98ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "ca43b74d-287b-422f-b051-db48f00b98ba" (UID: "ca43b74d-287b-422f-b051-db48f00b98ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:15:03 crc kubenswrapper[4787]: I0127 08:15:03.550648 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca43b74d-287b-422f-b051-db48f00b98ba-kube-api-access-44gkw" (OuterVolumeSpecName: "kube-api-access-44gkw") pod "ca43b74d-287b-422f-b051-db48f00b98ba" (UID: "ca43b74d-287b-422f-b051-db48f00b98ba"). InnerVolumeSpecName "kube-api-access-44gkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:15:03 crc kubenswrapper[4787]: I0127 08:15:03.550948 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca43b74d-287b-422f-b051-db48f00b98ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ca43b74d-287b-422f-b051-db48f00b98ba" (UID: "ca43b74d-287b-422f-b051-db48f00b98ba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:15:03 crc kubenswrapper[4787]: I0127 08:15:03.644901 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca43b74d-287b-422f-b051-db48f00b98ba-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:15:03 crc kubenswrapper[4787]: I0127 08:15:03.645159 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca43b74d-287b-422f-b051-db48f00b98ba-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:15:03 crc kubenswrapper[4787]: I0127 08:15:03.645318 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44gkw\" (UniqueName: \"kubernetes.io/projected/ca43b74d-287b-422f-b051-db48f00b98ba-kube-api-access-44gkw\") on node \"crc\" DevicePath \"\"" Jan 27 08:15:04 crc kubenswrapper[4787]: I0127 08:15:04.081743 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" event={"ID":"ca43b74d-287b-422f-b051-db48f00b98ba","Type":"ContainerDied","Data":"68d809d5825cca82ed36cccdd71cd7b017f6ddf63d8759382338a0b3c2b08ffe"} Jan 27 08:15:04 crc kubenswrapper[4787]: I0127 08:15:04.081788 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68d809d5825cca82ed36cccdd71cd7b017f6ddf63d8759382338a0b3c2b08ffe" Jan 27 08:15:04 crc kubenswrapper[4787]: I0127 08:15:04.081795 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-blvcr" Jan 27 08:15:22 crc kubenswrapper[4787]: I0127 08:15:22.822710 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:15:22 crc kubenswrapper[4787]: I0127 08:15:22.823346 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:15:35 crc kubenswrapper[4787]: I0127 08:15:35.753795 4787 scope.go:117] "RemoveContainer" containerID="d304505100a9e40f977481818afbc2239ed1910ed8a9349b8d4d47740a9431bb" Jan 27 08:15:52 crc kubenswrapper[4787]: I0127 08:15:52.823109 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:15:52 crc kubenswrapper[4787]: I0127 08:15:52.823878 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.483704 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kf9sw"] Jan 27 08:16:04 crc kubenswrapper[4787]: E0127 08:16:04.484918 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c90b47-9ec6-477a-a770-60904a0a4236" containerName="extract-content" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.484937 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c90b47-9ec6-477a-a770-60904a0a4236" containerName="extract-content" Jan 27 08:16:04 crc kubenswrapper[4787]: E0127 08:16:04.484967 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca43b74d-287b-422f-b051-db48f00b98ba" containerName="collect-profiles" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.485055 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca43b74d-287b-422f-b051-db48f00b98ba" containerName="collect-profiles" Jan 27 08:16:04 crc kubenswrapper[4787]: E0127 08:16:04.485078 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c90b47-9ec6-477a-a770-60904a0a4236" containerName="registry-server" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.485088 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c90b47-9ec6-477a-a770-60904a0a4236" containerName="registry-server" Jan 27 08:16:04 crc kubenswrapper[4787]: E0127 08:16:04.485110 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c90b47-9ec6-477a-a770-60904a0a4236" containerName="extract-utilities" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.485120 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c90b47-9ec6-477a-a770-60904a0a4236" containerName="extract-utilities" Jan 27 08:16:04 
crc kubenswrapper[4787]: I0127 08:16:04.485328 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca43b74d-287b-422f-b051-db48f00b98ba" containerName="collect-profiles" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.485340 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c90b47-9ec6-477a-a770-60904a0a4236" containerName="registry-server" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.486993 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.507260 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kf9sw"] Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.646131 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fe0298-627e-4958-8321-0ee99f5aa127-utilities\") pod \"community-operators-kf9sw\" (UID: \"d4fe0298-627e-4958-8321-0ee99f5aa127\") " pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.646217 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fe0298-627e-4958-8321-0ee99f5aa127-catalog-content\") pod \"community-operators-kf9sw\" (UID: \"d4fe0298-627e-4958-8321-0ee99f5aa127\") " pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.646247 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmbrw\" (UniqueName: \"kubernetes.io/projected/d4fe0298-627e-4958-8321-0ee99f5aa127-kube-api-access-dmbrw\") pod \"community-operators-kf9sw\" (UID: \"d4fe0298-627e-4958-8321-0ee99f5aa127\") " pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.747691 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fe0298-627e-4958-8321-0ee99f5aa127-utilities\") pod \"community-operators-kf9sw\" (UID: \"d4fe0298-627e-4958-8321-0ee99f5aa127\") " pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.747797 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fe0298-627e-4958-8321-0ee99f5aa127-catalog-content\") pod \"community-operators-kf9sw\" (UID: \"d4fe0298-627e-4958-8321-0ee99f5aa127\") " pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.747831 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmbrw\" (UniqueName: \"kubernetes.io/projected/d4fe0298-627e-4958-8321-0ee99f5aa127-kube-api-access-dmbrw\") pod \"community-operators-kf9sw\" (UID: \"d4fe0298-627e-4958-8321-0ee99f5aa127\") " pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.748387 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fe0298-627e-4958-8321-0ee99f5aa127-catalog-content\") pod \"community-operators-kf9sw\" (UID: \"d4fe0298-627e-4958-8321-0ee99f5aa127\") " 
pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.748514 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fe0298-627e-4958-8321-0ee99f5aa127-utilities\") pod \"community-operators-kf9sw\" (UID: \"d4fe0298-627e-4958-8321-0ee99f5aa127\") " pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.767513 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmbrw\" (UniqueName: \"kubernetes.io/projected/d4fe0298-627e-4958-8321-0ee99f5aa127-kube-api-access-dmbrw\") pod \"community-operators-kf9sw\" (UID: \"d4fe0298-627e-4958-8321-0ee99f5aa127\") " pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:04 crc kubenswrapper[4787]: I0127 08:16:04.825429 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:05 crc kubenswrapper[4787]: I0127 08:16:05.172285 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kf9sw"] Jan 27 08:16:05 crc kubenswrapper[4787]: I0127 08:16:05.590576 4787 generic.go:334] "Generic (PLEG): container finished" podID="d4fe0298-627e-4958-8321-0ee99f5aa127" containerID="546e611ab78f2ee068246cd0fcffa66f929314c19a038ba5bf735069a78b5d84" exitCode=0 Jan 27 08:16:05 crc kubenswrapper[4787]: I0127 08:16:05.590642 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf9sw" event={"ID":"d4fe0298-627e-4958-8321-0ee99f5aa127","Type":"ContainerDied","Data":"546e611ab78f2ee068246cd0fcffa66f929314c19a038ba5bf735069a78b5d84"} Jan 27 08:16:05 crc kubenswrapper[4787]: I0127 08:16:05.591002 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf9sw" event={"ID":"d4fe0298-627e-4958-8321-0ee99f5aa127","Type":"ContainerStarted","Data":"a95fa1435edc718a694995255225adb88985d99c5cdf70d182120e41920d9610"} Jan 27 08:16:06 crc kubenswrapper[4787]: I0127 08:16:06.599719 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf9sw" event={"ID":"d4fe0298-627e-4958-8321-0ee99f5aa127","Type":"ContainerStarted","Data":"d56cb21340678a3fcbfc0a9222f8a333c974898b22f227e88d5d4124b6a67e7b"} Jan 27 08:16:07 crc kubenswrapper[4787]: I0127 08:16:07.609665 4787 generic.go:334] "Generic (PLEG): container finished" podID="d4fe0298-627e-4958-8321-0ee99f5aa127" containerID="d56cb21340678a3fcbfc0a9222f8a333c974898b22f227e88d5d4124b6a67e7b" exitCode=0 Jan 27 08:16:07 crc kubenswrapper[4787]: I0127 08:16:07.609717 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf9sw" event={"ID":"d4fe0298-627e-4958-8321-0ee99f5aa127","Type":"ContainerDied","Data":"d56cb21340678a3fcbfc0a9222f8a333c974898b22f227e88d5d4124b6a67e7b"} Jan 27 08:16:08 crc kubenswrapper[4787]: I0127 08:16:08.618565 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf9sw" event={"ID":"d4fe0298-627e-4958-8321-0ee99f5aa127","Type":"ContainerStarted","Data":"857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d"} Jan 27 08:16:08 crc kubenswrapper[4787]: I0127 08:16:08.640875 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kf9sw" podStartSLOduration=2.127660977 
podStartE2EDuration="4.64085023s" podCreationTimestamp="2026-01-27 08:16:04 +0000 UTC" firstStartedPulling="2026-01-27 08:16:05.591958618 +0000 UTC m=+1471.244314110" lastFinishedPulling="2026-01-27 08:16:08.105147861 +0000 UTC m=+1473.757503363" observedRunningTime="2026-01-27 08:16:08.638262327 +0000 UTC m=+1474.290617839" watchObservedRunningTime="2026-01-27 08:16:08.64085023 +0000 UTC m=+1474.293205722" Jan 27 08:16:14 crc kubenswrapper[4787]: I0127 08:16:14.826239 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:14 crc kubenswrapper[4787]: I0127 08:16:14.826904 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:14 crc kubenswrapper[4787]: I0127 08:16:14.870025 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:15 crc kubenswrapper[4787]: I0127 08:16:15.734278 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:15 crc kubenswrapper[4787]: I0127 08:16:15.816640 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kf9sw"] Jan 27 08:16:17 crc kubenswrapper[4787]: I0127 08:16:17.689958 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kf9sw" podUID="d4fe0298-627e-4958-8321-0ee99f5aa127" containerName="registry-server" containerID="cri-o://857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d" gracePeriod=2 Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.103580 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.207197 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmbrw\" (UniqueName: \"kubernetes.io/projected/d4fe0298-627e-4958-8321-0ee99f5aa127-kube-api-access-dmbrw\") pod \"d4fe0298-627e-4958-8321-0ee99f5aa127\" (UID: \"d4fe0298-627e-4958-8321-0ee99f5aa127\") " Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.207781 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fe0298-627e-4958-8321-0ee99f5aa127-catalog-content\") pod \"d4fe0298-627e-4958-8321-0ee99f5aa127\" (UID: \"d4fe0298-627e-4958-8321-0ee99f5aa127\") " Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.207955 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fe0298-627e-4958-8321-0ee99f5aa127-utilities\") pod \"d4fe0298-627e-4958-8321-0ee99f5aa127\" (UID: \"d4fe0298-627e-4958-8321-0ee99f5aa127\") " Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.209537 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4fe0298-627e-4958-8321-0ee99f5aa127-utilities" (OuterVolumeSpecName: "utilities") pod "d4fe0298-627e-4958-8321-0ee99f5aa127" (UID: "d4fe0298-627e-4958-8321-0ee99f5aa127"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.217756 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4fe0298-627e-4958-8321-0ee99f5aa127-kube-api-access-dmbrw" (OuterVolumeSpecName: "kube-api-access-dmbrw") pod "d4fe0298-627e-4958-8321-0ee99f5aa127" (UID: "d4fe0298-627e-4958-8321-0ee99f5aa127"). InnerVolumeSpecName "kube-api-access-dmbrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.260405 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4fe0298-627e-4958-8321-0ee99f5aa127-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4fe0298-627e-4958-8321-0ee99f5aa127" (UID: "d4fe0298-627e-4958-8321-0ee99f5aa127"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.310430 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmbrw\" (UniqueName: \"kubernetes.io/projected/d4fe0298-627e-4958-8321-0ee99f5aa127-kube-api-access-dmbrw\") on node \"crc\" DevicePath \"\"" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.310454 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fe0298-627e-4958-8321-0ee99f5aa127-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.310466 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fe0298-627e-4958-8321-0ee99f5aa127-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.700826 4787 generic.go:334] "Generic (PLEG): container finished" podID="d4fe0298-627e-4958-8321-0ee99f5aa127" containerID="857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d" exitCode=0 Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.700880 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf9sw" event={"ID":"d4fe0298-627e-4958-8321-0ee99f5aa127","Type":"ContainerDied","Data":"857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d"} Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.700911 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf9sw" event={"ID":"d4fe0298-627e-4958-8321-0ee99f5aa127","Type":"ContainerDied","Data":"a95fa1435edc718a694995255225adb88985d99c5cdf70d182120e41920d9610"} Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.700934 4787 scope.go:117] "RemoveContainer" containerID="857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.700948 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kf9sw" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.735047 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kf9sw"] Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.735257 4787 scope.go:117] "RemoveContainer" containerID="d56cb21340678a3fcbfc0a9222f8a333c974898b22f227e88d5d4124b6a67e7b" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.741299 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kf9sw"] Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.757888 4787 scope.go:117] "RemoveContainer" containerID="546e611ab78f2ee068246cd0fcffa66f929314c19a038ba5bf735069a78b5d84" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.788255 4787 scope.go:117] "RemoveContainer" containerID="857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d" Jan 27 08:16:18 crc kubenswrapper[4787]: E0127 08:16:18.789919 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d\": container with ID starting with 857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d not found: ID does not exist" containerID="857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.789983 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d"} err="failed to get container status \"857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d\": rpc error: code = NotFound desc = could not find container \"857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d\": container with ID starting with 857641d24fd05870cf3b38ba69796ac9d253587fbf47c7a878304718b743001d not found: ID does not exist" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.790024 4787 scope.go:117] "RemoveContainer" containerID="d56cb21340678a3fcbfc0a9222f8a333c974898b22f227e88d5d4124b6a67e7b" Jan 27 08:16:18 crc kubenswrapper[4787]: E0127 08:16:18.790468 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d56cb21340678a3fcbfc0a9222f8a333c974898b22f227e88d5d4124b6a67e7b\": container with ID starting with d56cb21340678a3fcbfc0a9222f8a333c974898b22f227e88d5d4124b6a67e7b not found: ID does not exist" containerID="d56cb21340678a3fcbfc0a9222f8a333c974898b22f227e88d5d4124b6a67e7b" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.790492 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d56cb21340678a3fcbfc0a9222f8a333c974898b22f227e88d5d4124b6a67e7b"} err="failed to get container status \"d56cb21340678a3fcbfc0a9222f8a333c974898b22f227e88d5d4124b6a67e7b\": rpc error: code = NotFound desc = could not find container \"d56cb21340678a3fcbfc0a9222f8a333c974898b22f227e88d5d4124b6a67e7b\": container with ID starting with d56cb21340678a3fcbfc0a9222f8a333c974898b22f227e88d5d4124b6a67e7b not found: ID does not exist" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.790507 4787 scope.go:117] "RemoveContainer" containerID="546e611ab78f2ee068246cd0fcffa66f929314c19a038ba5bf735069a78b5d84" Jan 27 08:16:18 crc kubenswrapper[4787]: E0127 08:16:18.790846 4787 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"546e611ab78f2ee068246cd0fcffa66f929314c19a038ba5bf735069a78b5d84\": container with ID starting with 546e611ab78f2ee068246cd0fcffa66f929314c19a038ba5bf735069a78b5d84 not found: ID does not exist" containerID="546e611ab78f2ee068246cd0fcffa66f929314c19a038ba5bf735069a78b5d84" Jan 27 08:16:18 crc kubenswrapper[4787]: I0127 08:16:18.790864 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"546e611ab78f2ee068246cd0fcffa66f929314c19a038ba5bf735069a78b5d84"} err="failed to get container status \"546e611ab78f2ee068246cd0fcffa66f929314c19a038ba5bf735069a78b5d84\": rpc error: code = NotFound desc = could not find container \"546e611ab78f2ee068246cd0fcffa66f929314c19a038ba5bf735069a78b5d84\": container with ID starting with 546e611ab78f2ee068246cd0fcffa66f929314c19a038ba5bf735069a78b5d84 not found: ID does not exist" Jan 27 08:16:19 crc kubenswrapper[4787]: I0127 08:16:19.087708 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4fe0298-627e-4958-8321-0ee99f5aa127" path="/var/lib/kubelet/pods/d4fe0298-627e-4958-8321-0ee99f5aa127/volumes" Jan 27 08:16:22 crc kubenswrapper[4787]: I0127 08:16:22.823383 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:16:22 crc kubenswrapper[4787]: I0127 08:16:22.823858 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:16:22 crc kubenswrapper[4787]: I0127 08:16:22.823925 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 08:16:22 crc kubenswrapper[4787]: I0127 08:16:22.824944 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66c62556ef822841ab93c99b23fda5b0b3ea129b94ebcca6f4cff0b2d8c7f32e"} pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:16:22 crc kubenswrapper[4787]: I0127 08:16:22.825011 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" containerID="cri-o://66c62556ef822841ab93c99b23fda5b0b3ea129b94ebcca6f4cff0b2d8c7f32e" gracePeriod=600 Jan 27 08:16:23 crc kubenswrapper[4787]: I0127 08:16:23.743328 4787 generic.go:334] "Generic (PLEG): container finished" podID="f051e184-acac-47cf-9e04-9df648288715" containerID="66c62556ef822841ab93c99b23fda5b0b3ea129b94ebcca6f4cff0b2d8c7f32e" exitCode=0 Jan 27 08:16:23 crc kubenswrapper[4787]: I0127 08:16:23.743585 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" 
event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerDied","Data":"66c62556ef822841ab93c99b23fda5b0b3ea129b94ebcca6f4cff0b2d8c7f32e"} Jan 27 08:16:23 crc kubenswrapper[4787]: I0127 08:16:23.743700 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerStarted","Data":"125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e"} Jan 27 08:16:23 crc kubenswrapper[4787]: I0127 08:16:23.743747 4787 scope.go:117] "RemoveContainer" containerID="62fa4f58004172098a708acf93c4ba2c2d75c5b2ad098327437e26bfa28ab448" Jan 27 08:16:35 crc kubenswrapper[4787]: I0127 08:16:35.891471 4787 scope.go:117] "RemoveContainer" containerID="50c79c9d4510d9aad33c8c6b7d7b07dfbec318be7cf6d78cc13f3f9ebe9f87c3" Jan 27 08:16:37 crc kubenswrapper[4787]: I0127 08:16:37.180418 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-5828-account-create-update-xh7r6_646de171-9e06-4df6-9e1e-07319c04b95c/mariadb-account-create-update/0.log" Jan 27 08:16:37 crc kubenswrapper[4787]: I0127 08:16:37.708427 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-678dff9ff-k6jwl_91494b32-1284-49fb-b548-3fbc5f1e1ddf/keystone-api/0.log" Jan 27 08:16:38 crc kubenswrapper[4787]: I0127 08:16:38.247682 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-bootstrap-bn5rd_6d885638-7008-41c1-85fc-43025469e404/keystone-bootstrap/0.log" Jan 27 08:16:38 crc kubenswrapper[4787]: I0127 08:16:38.746204 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-create-wjmqh_205e2c3c-82ef-4aaa-b1b3-0465a855bde9/mariadb-database-create/0.log" Jan 27 08:16:39 crc kubenswrapper[4787]: I0127 08:16:39.245076 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-sync-ztn8q_c5da7883-eba0-4fa4-92c5-9085b0ce0090/keystone-db-sync/0.log" Jan 27 08:16:39 crc kubenswrapper[4787]: I0127 08:16:39.943580 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_b0fc932b-0d21-426c-9b58-15c14ad9e95b/memcached/0.log" Jan 27 08:16:40 crc kubenswrapper[4787]: I0127 08:16:40.455197 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_841b999e-6e42-4f28-8fd8-334f69c3e3e6/galera/0.log" Jan 27 08:16:40 crc kubenswrapper[4787]: I0127 08:16:40.987120 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_144491fe-49b9-4a76-8da2-db798cf6a1e4/galera/0.log" Jan 27 08:16:41 crc kubenswrapper[4787]: I0127 08:16:41.512534 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_4826cdba-006d-447f-a206-367bd1dc8893/openstackclient/0.log" Jan 27 08:16:42 crc kubenswrapper[4787]: I0127 08:16:42.058427 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-545ff75684-kqpgf_7d647302-7ff9-465d-b5a5-a3a76f35df28/placement-log/0.log" Jan 27 08:16:42 crc kubenswrapper[4787]: I0127 08:16:42.561312 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-6d8d-account-create-update-5jvm9_2e3d75aa-1156-48f5-9435-b8b3cd2ec555/mariadb-account-create-update/0.log" Jan 27 08:16:43 crc kubenswrapper[4787]: I0127 08:16:43.034766 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_placement-db-create-8rt5v_7ba9e1b4-f385-47f9-80f6-508a3406aa2a/mariadb-database-create/0.log" Jan 27 08:16:43 crc kubenswrapper[4787]: I0127 08:16:43.397124 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-sync-7x42z_54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8/placement-db-sync/0.log" Jan 27 08:16:43 crc kubenswrapper[4787]: I0127 08:16:43.831951 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_68be2504-1de8-427c-9937-bd0428f7c5c4/rabbitmq/0.log" Jan 27 08:16:44 crc kubenswrapper[4787]: I0127 08:16:44.346921 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_faff0a15-880a-4cf7-a0e0-81d573ace274/rabbitmq/0.log" Jan 27 08:16:44 crc kubenswrapper[4787]: I0127 08:16:44.768041 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-notifications-server-0_1d9a6f95-2510-4e74-b4f7-fb592d761c91/rabbitmq/0.log" Jan 27 08:16:45 crc kubenswrapper[4787]: I0127 08:16:45.206185 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_f84bfff4-b4f2-40d8-8b81-a9e5eb776442/rabbitmq/0.log" Jan 27 08:16:45 crc kubenswrapper[4787]: I0127 08:16:45.595656 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_root-account-create-update-b4lsd_c3150913-c395-4ed0-a5d3-616426f58671/mariadb-account-create-update/0.log" Jan 27 08:17:16 crc kubenswrapper[4787]: I0127 08:17:16.415610 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2_46a8c934-ec5d-48f9-8c39-f4e4c730d7a5/extract/0.log" Jan 27 08:17:16 crc kubenswrapper[4787]: I0127 08:17:16.859236 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-d5r5z_4b5515cb-f539-4a9b-8c46-36c5c62c5c93/manager/0.log" Jan 27 08:17:17 crc kubenswrapper[4787]: I0127 08:17:17.305299 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-6tnpg_85bafad5-30f3-4931-bd2a-0d45e2b0f844/manager/0.log" Jan 27 08:17:17 crc kubenswrapper[4787]: I0127 08:17:17.731699 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-627g6_dd8dbc24-8fb8-4ae2-96c3-b7112c7d7c65/manager/0.log" Jan 27 08:17:18 crc kubenswrapper[4787]: I0127 08:17:18.164082 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g_f63e7c4f-d194-4209-a8ac-f67c9dc7dde1/extract/0.log" Jan 27 08:17:18 crc kubenswrapper[4787]: I0127 08:17:18.578583 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-kk6mj_be4be3e5-1589-4e84-9d35-f104bc3d5ad4/manager/0.log" Jan 27 08:17:18 crc kubenswrapper[4787]: I0127 08:17:18.996662 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-57zhq_f621b9f3-ead9-4fab-b33f-f9d6179e8f3f/manager/0.log" Jan 27 08:17:19 crc kubenswrapper[4787]: I0127 08:17:19.405660 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-vkmnm_a21cc0b9-75d2-4ddb-925e-23eeaac2dd35/manager/0.log" 
Jan 27 08:17:19 crc kubenswrapper[4787]: I0127 08:17:19.873602 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-drtqw_c634491f-6069-472d-bff7-d8903d0afa1d/manager/0.log" Jan 27 08:17:20 crc kubenswrapper[4787]: I0127 08:17:20.296484 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-j7klk_ff52e5a2-2aab-451a-8869-72c1a506940a/manager/0.log" Jan 27 08:17:20 crc kubenswrapper[4787]: I0127 08:17:20.743524 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-2k8pd_32019c4d-61e3-4c27-86d4-3a79bb40ce70/manager/0.log" Jan 27 08:17:21 crc kubenswrapper[4787]: I0127 08:17:21.157052 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-tbr5m_59efc0ff-5727-48f9-91b7-36533ba5f94a/manager/0.log" Jan 27 08:17:21 crc kubenswrapper[4787]: I0127 08:17:21.601600 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-88gn4_e59e94f5-9033-488d-a186-476cb6cbb3f0/manager/0.log" Jan 27 08:17:22 crc kubenswrapper[4787]: I0127 08:17:22.044565 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-vk5vt_be114c8e-3aa0-41b4-9954-d07c800d3cfc/manager/0.log" Jan 27 08:17:22 crc kubenswrapper[4787]: I0127 08:17:22.500153 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57d6b69d8b-j9nxt_b0a80c53-be1f-444d-a49a-05c89daad297/manager/0.log" Jan 27 08:17:22 crc kubenswrapper[4787]: I0127 08:17:22.957043 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-2jn9v_804d50f1-7da5-4d0c-8c0e-cac4c5e5a244/registry-server/0.log" Jan 27 08:17:23 crc kubenswrapper[4787]: I0127 08:17:23.376800 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-x4nz5_601fe40b-5553-4225-b5bd-214428d6fa68/manager/0.log" Jan 27 08:17:23 crc kubenswrapper[4787]: I0127 08:17:23.794225 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq_116b0532-a8d5-47ec-8e12-e7ef482094d6/manager/0.log" Jan 27 08:17:24 crc kubenswrapper[4787]: I0127 08:17:24.515265 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-c595dbb59-xc2xn_7a0ecc5c-eebe-457b-9a72-804226878c7f/manager/0.log" Jan 27 08:17:24 crc kubenswrapper[4787]: I0127 08:17:24.967871 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cnfx4_ba39dacd-d1da-4a2b-a4b9-fa0e917986f0/registry-server/0.log" Jan 27 08:17:25 crc kubenswrapper[4787]: I0127 08:17:25.357635 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-l9mc7_7cae616e-6aa8-405f-b10a-2a5346fae5b4/manager/0.log" Jan 27 08:17:25 crc kubenswrapper[4787]: I0127 08:17:25.801989 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-qhnz4_a552e9e7-f9fa-4b71-9ec6-848a2230279f/manager/0.log" Jan 27 08:17:26 crc kubenswrapper[4787]: I0127 
08:17:26.270515 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5lxsh_2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7/operator/0.log" Jan 27 08:17:26 crc kubenswrapper[4787]: I0127 08:17:26.670543 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-pxbjw_89d7a7e7-443a-486f-b8b7-a024082b9fba/manager/0.log" Jan 27 08:17:27 crc kubenswrapper[4787]: I0127 08:17:27.111198 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-cqpjk_7986c3dd-6d06-4892-9fed-1b396669685b/manager/0.log" Jan 27 08:17:27 crc kubenswrapper[4787]: I0127 08:17:27.504186 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-dnxpj_7ec92388-f799-43cc-9235-6d4b717bc98e/manager/0.log" Jan 27 08:17:27 crc kubenswrapper[4787]: I0127 08:17:27.928345 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75db85654f-8cmb9_292c617a-b64f-4b53-b05e-360769308e43/manager/0.log" Jan 27 08:17:32 crc kubenswrapper[4787]: I0127 08:17:32.715102 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-5828-account-create-update-xh7r6_646de171-9e06-4df6-9e1e-07319c04b95c/mariadb-account-create-update/0.log" Jan 27 08:17:33 crc kubenswrapper[4787]: I0127 08:17:33.239030 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-678dff9ff-k6jwl_91494b32-1284-49fb-b548-3fbc5f1e1ddf/keystone-api/0.log" Jan 27 08:17:33 crc kubenswrapper[4787]: I0127 08:17:33.769839 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-bootstrap-bn5rd_6d885638-7008-41c1-85fc-43025469e404/keystone-bootstrap/0.log" Jan 27 08:17:34 crc kubenswrapper[4787]: I0127 08:17:34.295668 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-create-wjmqh_205e2c3c-82ef-4aaa-b1b3-0465a855bde9/mariadb-database-create/0.log" Jan 27 08:17:34 crc kubenswrapper[4787]: I0127 08:17:34.797457 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-sync-ztn8q_c5da7883-eba0-4fa4-92c5-9085b0ce0090/keystone-db-sync/0.log" Jan 27 08:17:35 crc kubenswrapper[4787]: I0127 08:17:35.456519 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_b0fc932b-0d21-426c-9b58-15c14ad9e95b/memcached/0.log" Jan 27 08:17:36 crc kubenswrapper[4787]: I0127 08:17:36.014852 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_841b999e-6e42-4f28-8fd8-334f69c3e3e6/galera/0.log" Jan 27 08:17:36 crc kubenswrapper[4787]: I0127 08:17:36.547637 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_144491fe-49b9-4a76-8da2-db798cf6a1e4/galera/0.log" Jan 27 08:17:37 crc kubenswrapper[4787]: I0127 08:17:37.073199 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_4826cdba-006d-447f-a206-367bd1dc8893/openstackclient/0.log" Jan 27 08:17:37 crc kubenswrapper[4787]: I0127 08:17:37.660253 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-545ff75684-kqpgf_7d647302-7ff9-465d-b5a5-a3a76f35df28/placement-log/0.log" Jan 27 08:17:38 crc kubenswrapper[4787]: I0127 
08:17:38.218844 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-6d8d-account-create-update-5jvm9_2e3d75aa-1156-48f5-9435-b8b3cd2ec555/mariadb-account-create-update/0.log" Jan 27 08:17:38 crc kubenswrapper[4787]: I0127 08:17:38.663381 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-create-8rt5v_7ba9e1b4-f385-47f9-80f6-508a3406aa2a/mariadb-database-create/0.log" Jan 27 08:17:39 crc kubenswrapper[4787]: I0127 08:17:39.096838 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-sync-7x42z_54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8/placement-db-sync/0.log" Jan 27 08:17:39 crc kubenswrapper[4787]: I0127 08:17:39.547275 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_68be2504-1de8-427c-9937-bd0428f7c5c4/rabbitmq/0.log" Jan 27 08:17:39 crc kubenswrapper[4787]: I0127 08:17:39.987762 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_faff0a15-880a-4cf7-a0e0-81d573ace274/rabbitmq/0.log" Jan 27 08:17:40 crc kubenswrapper[4787]: I0127 08:17:40.394888 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-notifications-server-0_1d9a6f95-2510-4e74-b4f7-fb592d761c91/rabbitmq/0.log" Jan 27 08:17:40 crc kubenswrapper[4787]: I0127 08:17:40.852498 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_f84bfff4-b4f2-40d8-8b81-a9e5eb776442/rabbitmq/0.log" Jan 27 08:17:41 crc kubenswrapper[4787]: I0127 08:17:41.315653 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_root-account-create-update-b4lsd_c3150913-c395-4ed0-a5d3-616426f58671/mariadb-account-create-update/0.log" Jan 27 08:18:12 crc kubenswrapper[4787]: I0127 08:18:12.680090 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2_46a8c934-ec5d-48f9-8c39-f4e4c730d7a5/extract/0.log" Jan 27 08:18:13 crc kubenswrapper[4787]: I0127 08:18:13.099649 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-d5r5z_4b5515cb-f539-4a9b-8c46-36c5c62c5c93/manager/0.log" Jan 27 08:18:13 crc kubenswrapper[4787]: I0127 08:18:13.523475 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-6tnpg_85bafad5-30f3-4931-bd2a-0d45e2b0f844/manager/0.log" Jan 27 08:18:13 crc kubenswrapper[4787]: I0127 08:18:13.914225 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-627g6_dd8dbc24-8fb8-4ae2-96c3-b7112c7d7c65/manager/0.log" Jan 27 08:18:14 crc kubenswrapper[4787]: I0127 08:18:14.340827 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g_f63e7c4f-d194-4209-a8ac-f67c9dc7dde1/extract/0.log" Jan 27 08:18:14 crc kubenswrapper[4787]: I0127 08:18:14.822837 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-kk6mj_be4be3e5-1589-4e84-9d35-f104bc3d5ad4/manager/0.log" Jan 27 08:18:15 crc kubenswrapper[4787]: I0127 08:18:15.214708 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-57zhq_f621b9f3-ead9-4fab-b33f-f9d6179e8f3f/manager/0.log" Jan 27 08:18:15 crc kubenswrapper[4787]: I0127 08:18:15.601152 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-vkmnm_a21cc0b9-75d2-4ddb-925e-23eeaac2dd35/manager/0.log" Jan 27 08:18:16 crc kubenswrapper[4787]: I0127 08:18:16.090095 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-drtqw_c634491f-6069-472d-bff7-d8903d0afa1d/manager/0.log" Jan 27 08:18:16 crc kubenswrapper[4787]: I0127 08:18:16.539510 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-j7klk_ff52e5a2-2aab-451a-8869-72c1a506940a/manager/0.log" Jan 27 08:18:16 crc kubenswrapper[4787]: I0127 08:18:16.994374 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-2k8pd_32019c4d-61e3-4c27-86d4-3a79bb40ce70/manager/0.log" Jan 27 08:18:17 crc kubenswrapper[4787]: I0127 08:18:17.380005 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-tbr5m_59efc0ff-5727-48f9-91b7-36533ba5f94a/manager/0.log" Jan 27 08:18:17 crc kubenswrapper[4787]: I0127 08:18:17.782357 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-88gn4_e59e94f5-9033-488d-a186-476cb6cbb3f0/manager/0.log" Jan 27 08:18:18 crc kubenswrapper[4787]: I0127 08:18:18.184038 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-vk5vt_be114c8e-3aa0-41b4-9954-d07c800d3cfc/manager/0.log" Jan 27 08:18:18 crc kubenswrapper[4787]: I0127 08:18:18.621193 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57d6b69d8b-j9nxt_b0a80c53-be1f-444d-a49a-05c89daad297/manager/0.log" Jan 27 08:18:19 crc kubenswrapper[4787]: I0127 08:18:19.024369 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-2jn9v_804d50f1-7da5-4d0c-8c0e-cac4c5e5a244/registry-server/0.log" Jan 27 08:18:19 crc kubenswrapper[4787]: I0127 08:18:19.442853 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-x4nz5_601fe40b-5553-4225-b5bd-214428d6fa68/manager/0.log" Jan 27 08:18:19 crc kubenswrapper[4787]: I0127 08:18:19.883539 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq_116b0532-a8d5-47ec-8e12-e7ef482094d6/manager/0.log" Jan 27 08:18:20 crc kubenswrapper[4787]: I0127 08:18:20.453080 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-c595dbb59-xc2xn_7a0ecc5c-eebe-457b-9a72-804226878c7f/manager/0.log" Jan 27 08:18:20 crc kubenswrapper[4787]: I0127 08:18:20.876912 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cnfx4_ba39dacd-d1da-4a2b-a4b9-fa0e917986f0/registry-server/0.log" Jan 27 08:18:21 crc kubenswrapper[4787]: I0127 08:18:21.299924 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-l9mc7_7cae616e-6aa8-405f-b10a-2a5346fae5b4/manager/0.log" Jan 27 08:18:21 crc kubenswrapper[4787]: I0127 08:18:21.735033 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-qhnz4_a552e9e7-f9fa-4b71-9ec6-848a2230279f/manager/0.log" Jan 27 08:18:22 crc kubenswrapper[4787]: I0127 08:18:22.127157 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5lxsh_2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7/operator/0.log" Jan 27 08:18:22 crc kubenswrapper[4787]: I0127 08:18:22.526247 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-pxbjw_89d7a7e7-443a-486f-b8b7-a024082b9fba/manager/0.log" Jan 27 08:18:22 crc kubenswrapper[4787]: I0127 08:18:22.979603 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-cqpjk_7986c3dd-6d06-4892-9fed-1b396669685b/manager/0.log" Jan 27 08:18:23 crc kubenswrapper[4787]: I0127 08:18:23.428194 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-dnxpj_7ec92388-f799-43cc-9235-6d4b717bc98e/manager/0.log" Jan 27 08:18:23 crc kubenswrapper[4787]: I0127 08:18:23.878443 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75db85654f-8cmb9_292c617a-b64f-4b53-b05e-360769308e43/manager/0.log" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.374309 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zjjpz/must-gather-hldr6"] Jan 27 08:18:46 crc kubenswrapper[4787]: E0127 08:18:46.375233 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fe0298-627e-4958-8321-0ee99f5aa127" containerName="extract-utilities" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.375247 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fe0298-627e-4958-8321-0ee99f5aa127" containerName="extract-utilities" Jan 27 08:18:46 crc kubenswrapper[4787]: E0127 08:18:46.375260 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fe0298-627e-4958-8321-0ee99f5aa127" containerName="registry-server" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.375266 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fe0298-627e-4958-8321-0ee99f5aa127" containerName="registry-server" Jan 27 08:18:46 crc kubenswrapper[4787]: E0127 08:18:46.375283 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fe0298-627e-4958-8321-0ee99f5aa127" containerName="extract-content" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.375289 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fe0298-627e-4958-8321-0ee99f5aa127" containerName="extract-content" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.375426 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fe0298-627e-4958-8321-0ee99f5aa127" containerName="registry-server" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.376350 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zjjpz/must-gather-hldr6" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.379247 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zjjpz"/"openshift-service-ca.crt" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.379498 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zjjpz"/"kube-root-ca.crt" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.448117 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zjjpz/must-gather-hldr6"] Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.495872 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrwnk\" (UniqueName: \"kubernetes.io/projected/8f8dfa07-27b1-450f-9019-56d0ced6d238-kube-api-access-lrwnk\") pod \"must-gather-hldr6\" (UID: \"8f8dfa07-27b1-450f-9019-56d0ced6d238\") " pod="openshift-must-gather-zjjpz/must-gather-hldr6" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.495960 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8f8dfa07-27b1-450f-9019-56d0ced6d238-must-gather-output\") pod \"must-gather-hldr6\" (UID: \"8f8dfa07-27b1-450f-9019-56d0ced6d238\") " pod="openshift-must-gather-zjjpz/must-gather-hldr6" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.598471 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrwnk\" (UniqueName: \"kubernetes.io/projected/8f8dfa07-27b1-450f-9019-56d0ced6d238-kube-api-access-lrwnk\") pod \"must-gather-hldr6\" (UID: \"8f8dfa07-27b1-450f-9019-56d0ced6d238\") " pod="openshift-must-gather-zjjpz/must-gather-hldr6" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.598608 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8f8dfa07-27b1-450f-9019-56d0ced6d238-must-gather-output\") pod \"must-gather-hldr6\" (UID: \"8f8dfa07-27b1-450f-9019-56d0ced6d238\") " pod="openshift-must-gather-zjjpz/must-gather-hldr6" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.599161 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8f8dfa07-27b1-450f-9019-56d0ced6d238-must-gather-output\") pod \"must-gather-hldr6\" (UID: \"8f8dfa07-27b1-450f-9019-56d0ced6d238\") " pod="openshift-must-gather-zjjpz/must-gather-hldr6" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.622303 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrwnk\" (UniqueName: \"kubernetes.io/projected/8f8dfa07-27b1-450f-9019-56d0ced6d238-kube-api-access-lrwnk\") pod \"must-gather-hldr6\" (UID: \"8f8dfa07-27b1-450f-9019-56d0ced6d238\") " pod="openshift-must-gather-zjjpz/must-gather-hldr6" Jan 27 08:18:46 crc kubenswrapper[4787]: I0127 08:18:46.697075 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zjjpz/must-gather-hldr6" Jan 27 08:18:47 crc kubenswrapper[4787]: I0127 08:18:47.177787 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zjjpz/must-gather-hldr6"] Jan 27 08:18:47 crc kubenswrapper[4787]: I0127 08:18:47.814172 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjjpz/must-gather-hldr6" event={"ID":"8f8dfa07-27b1-450f-9019-56d0ced6d238","Type":"ContainerStarted","Data":"85e2309321b5bceb60627051cc544310f95b011ce3da3e2ae2a9b84746e8c2f3"} Jan 27 08:18:52 crc kubenswrapper[4787]: I0127 08:18:52.823080 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:18:52 crc kubenswrapper[4787]: I0127 08:18:52.823979 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:18:54 crc kubenswrapper[4787]: I0127 08:18:54.872620 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjjpz/must-gather-hldr6" event={"ID":"8f8dfa07-27b1-450f-9019-56d0ced6d238","Type":"ContainerStarted","Data":"387fee2205dae8bd6763255730dec63813d6d41dc01c06d500efe409bc6e4380"} Jan 27 08:18:54 crc kubenswrapper[4787]: I0127 08:18:54.873369 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjjpz/must-gather-hldr6" event={"ID":"8f8dfa07-27b1-450f-9019-56d0ced6d238","Type":"ContainerStarted","Data":"f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156"} Jan 27 08:18:54 crc kubenswrapper[4787]: I0127 08:18:54.886690 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zjjpz/must-gather-hldr6" podStartSLOduration=2.278290574 podStartE2EDuration="8.88667474s" podCreationTimestamp="2026-01-27 08:18:46 +0000 UTC" firstStartedPulling="2026-01-27 08:18:47.183386081 +0000 UTC m=+1632.835741573" lastFinishedPulling="2026-01-27 08:18:53.791770237 +0000 UTC m=+1639.444125739" observedRunningTime="2026-01-27 08:18:54.885023259 +0000 UTC m=+1640.537378751" watchObservedRunningTime="2026-01-27 08:18:54.88667474 +0000 UTC m=+1640.539030232" Jan 27 08:19:22 crc kubenswrapper[4787]: I0127 08:19:22.822668 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:19:22 crc kubenswrapper[4787]: I0127 08:19:22.823266 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:19:37 crc kubenswrapper[4787]: I0127 08:19:37.053242 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-b4lsd"] Jan 27 08:19:37 crc 
kubenswrapper[4787]: I0127 08:19:37.061777 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-b4lsd"] Jan 27 08:19:37 crc kubenswrapper[4787]: I0127 08:19:37.087117 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3150913-c395-4ed0-a5d3-616426f58671" path="/var/lib/kubelet/pods/c3150913-c395-4ed0-a5d3-616426f58671/volumes" Jan 27 08:19:38 crc kubenswrapper[4787]: I0127 08:19:38.028317 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-5828-account-create-update-xh7r6"] Jan 27 08:19:38 crc kubenswrapper[4787]: I0127 08:19:38.036700 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-create-8rt5v"] Jan 27 08:19:38 crc kubenswrapper[4787]: I0127 08:19:38.045226 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-create-wjmqh"] Jan 27 08:19:38 crc kubenswrapper[4787]: I0127 08:19:38.053410 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-6d8d-account-create-update-5jvm9"] Jan 27 08:19:38 crc kubenswrapper[4787]: I0127 08:19:38.062200 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-5828-account-create-update-xh7r6"] Jan 27 08:19:38 crc kubenswrapper[4787]: I0127 08:19:38.067460 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-create-8rt5v"] Jan 27 08:19:38 crc kubenswrapper[4787]: I0127 08:19:38.072542 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-6d8d-account-create-update-5jvm9"] Jan 27 08:19:38 crc kubenswrapper[4787]: I0127 08:19:38.079146 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-create-wjmqh"] Jan 27 08:19:39 crc kubenswrapper[4787]: I0127 08:19:39.092138 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205e2c3c-82ef-4aaa-b1b3-0465a855bde9" path="/var/lib/kubelet/pods/205e2c3c-82ef-4aaa-b1b3-0465a855bde9/volumes" Jan 27 08:19:39 crc kubenswrapper[4787]: I0127 08:19:39.093785 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3d75aa-1156-48f5-9435-b8b3cd2ec555" path="/var/lib/kubelet/pods/2e3d75aa-1156-48f5-9435-b8b3cd2ec555/volumes" Jan 27 08:19:39 crc kubenswrapper[4787]: I0127 08:19:39.094779 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646de171-9e06-4df6-9e1e-07319c04b95c" path="/var/lib/kubelet/pods/646de171-9e06-4df6-9e1e-07319c04b95c/volumes" Jan 27 08:19:39 crc kubenswrapper[4787]: I0127 08:19:39.095727 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ba9e1b4-f385-47f9-80f6-508a3406aa2a" path="/var/lib/kubelet/pods/7ba9e1b4-f385-47f9-80f6-508a3406aa2a/volumes" Jan 27 08:19:52 crc kubenswrapper[4787]: I0127 08:19:52.822664 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:19:52 crc kubenswrapper[4787]: I0127 08:19:52.823011 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 27 08:19:52 crc kubenswrapper[4787]: I0127 08:19:52.823063 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" Jan 27 08:19:52 crc kubenswrapper[4787]: I0127 08:19:52.823810 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e"} pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:19:52 crc kubenswrapper[4787]: I0127 08:19:52.823868 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" containerID="cri-o://125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" gracePeriod=600 Jan 27 08:19:52 crc kubenswrapper[4787]: E0127 08:19:52.947712 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:19:53 crc kubenswrapper[4787]: I0127 08:19:53.274335 4787 generic.go:334] "Generic (PLEG): container finished" podID="f051e184-acac-47cf-9e04-9df648288715" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" exitCode=0 Jan 27 08:19:53 crc kubenswrapper[4787]: I0127 08:19:53.274388 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerDied","Data":"125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e"} Jan 27 08:19:53 crc kubenswrapper[4787]: I0127 08:19:53.274436 4787 scope.go:117] "RemoveContainer" containerID="66c62556ef822841ab93c99b23fda5b0b3ea129b94ebcca6f4cff0b2d8c7f32e" Jan 27 08:19:53 crc kubenswrapper[4787]: I0127 08:19:53.275038 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:19:53 crc kubenswrapper[4787]: E0127 08:19:53.275376 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:20:00 crc kubenswrapper[4787]: I0127 08:20:00.180126 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2_46a8c934-ec5d-48f9-8c39-f4e4c730d7a5/util/0.log" Jan 27 08:20:00 crc kubenswrapper[4787]: I0127 08:20:00.180130 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2_46a8c934-ec5d-48f9-8c39-f4e4c730d7a5/util/0.log" Jan 27 08:20:00 crc 
kubenswrapper[4787]: I0127 08:20:00.181035 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2_46a8c934-ec5d-48f9-8c39-f4e4c730d7a5/pull/0.log" Jan 27 08:20:00 crc kubenswrapper[4787]: I0127 08:20:00.276710 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2_46a8c934-ec5d-48f9-8c39-f4e4c730d7a5/pull/0.log" Jan 27 08:20:00 crc kubenswrapper[4787]: I0127 08:20:00.514562 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2_46a8c934-ec5d-48f9-8c39-f4e4c730d7a5/util/0.log" Jan 27 08:20:00 crc kubenswrapper[4787]: I0127 08:20:00.519129 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2_46a8c934-ec5d-48f9-8c39-f4e4c730d7a5/extract/0.log" Jan 27 08:20:00 crc kubenswrapper[4787]: I0127 08:20:00.578289 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_666ed4883fb88ff51f70cbd0ee285bf2a10ae3509c94c96f2548a58c3bzxmp2_46a8c934-ec5d-48f9-8c39-f4e4c730d7a5/pull/0.log" Jan 27 08:20:00 crc kubenswrapper[4787]: I0127 08:20:00.738230 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-d5r5z_4b5515cb-f539-4a9b-8c46-36c5c62c5c93/manager/0.log" Jan 27 08:20:00 crc kubenswrapper[4787]: I0127 08:20:00.775464 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-6tnpg_85bafad5-30f3-4931-bd2a-0d45e2b0f844/manager/0.log" Jan 27 08:20:00 crc kubenswrapper[4787]: I0127 08:20:00.948261 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-627g6_dd8dbc24-8fb8-4ae2-96c3-b7112c7d7c65/manager/0.log" Jan 27 08:20:00 crc kubenswrapper[4787]: I0127 08:20:00.979720 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g_f63e7c4f-d194-4209-a8ac-f67c9dc7dde1/util/0.log" Jan 27 08:20:01 crc kubenswrapper[4787]: I0127 08:20:01.195812 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g_f63e7c4f-d194-4209-a8ac-f67c9dc7dde1/pull/0.log" Jan 27 08:20:01 crc kubenswrapper[4787]: I0127 08:20:01.202813 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g_f63e7c4f-d194-4209-a8ac-f67c9dc7dde1/util/0.log" Jan 27 08:20:01 crc kubenswrapper[4787]: I0127 08:20:01.232726 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g_f63e7c4f-d194-4209-a8ac-f67c9dc7dde1/pull/0.log" Jan 27 08:20:01 crc kubenswrapper[4787]: I0127 08:20:01.677676 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g_f63e7c4f-d194-4209-a8ac-f67c9dc7dde1/pull/0.log" Jan 27 08:20:01 crc kubenswrapper[4787]: I0127 08:20:01.681569 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g_f63e7c4f-d194-4209-a8ac-f67c9dc7dde1/util/0.log" Jan 27 08:20:01 crc kubenswrapper[4787]: I0127 08:20:01.743802 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fd3dfdf443b13861ec8513a19ccfc4147bdcd20f329cc4d382b7652cb7zp96g_f63e7c4f-d194-4209-a8ac-f67c9dc7dde1/extract/0.log" Jan 27 08:20:01 crc kubenswrapper[4787]: I0127 08:20:01.841841 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-kk6mj_be4be3e5-1589-4e84-9d35-f104bc3d5ad4/manager/0.log" Jan 27 08:20:01 crc kubenswrapper[4787]: I0127 08:20:01.933411 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-57zhq_f621b9f3-ead9-4fab-b33f-f9d6179e8f3f/manager/0.log" Jan 27 08:20:02 crc kubenswrapper[4787]: I0127 08:20:02.072526 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-vkmnm_a21cc0b9-75d2-4ddb-925e-23eeaac2dd35/manager/0.log" Jan 27 08:20:02 crc kubenswrapper[4787]: I0127 08:20:02.268488 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-drtqw_c634491f-6069-472d-bff7-d8903d0afa1d/manager/0.log" Jan 27 08:20:02 crc kubenswrapper[4787]: I0127 08:20:02.405102 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-j7klk_ff52e5a2-2aab-451a-8869-72c1a506940a/manager/0.log" Jan 27 08:20:02 crc kubenswrapper[4787]: I0127 08:20:02.566610 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-2k8pd_32019c4d-61e3-4c27-86d4-3a79bb40ce70/manager/0.log" Jan 27 08:20:02 crc kubenswrapper[4787]: I0127 08:20:02.633989 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-tbr5m_59efc0ff-5727-48f9-91b7-36533ba5f94a/manager/0.log" Jan 27 08:20:02 crc kubenswrapper[4787]: I0127 08:20:02.795914 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-88gn4_e59e94f5-9033-488d-a186-476cb6cbb3f0/manager/0.log" Jan 27 08:20:02 crc kubenswrapper[4787]: I0127 08:20:02.909449 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-vk5vt_be114c8e-3aa0-41b4-9954-d07c800d3cfc/manager/0.log" Jan 27 08:20:03 crc kubenswrapper[4787]: I0127 08:20:03.045416 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57d6b69d8b-j9nxt_b0a80c53-be1f-444d-a49a-05c89daad297/manager/0.log" Jan 27 08:20:03 crc kubenswrapper[4787]: I0127 08:20:03.170690 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-2jn9v_804d50f1-7da5-4d0c-8c0e-cac4c5e5a244/registry-server/0.log" Jan 27 08:20:03 crc kubenswrapper[4787]: I0127 08:20:03.302032 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-x4nz5_601fe40b-5553-4225-b5bd-214428d6fa68/manager/0.log" Jan 27 08:20:03 crc kubenswrapper[4787]: I0127 08:20:03.380539 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854cf9xq_116b0532-a8d5-47ec-8e12-e7ef482094d6/manager/0.log" Jan 27 08:20:03 crc kubenswrapper[4787]: I0127 08:20:03.645740 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-c595dbb59-xc2xn_7a0ecc5c-eebe-457b-9a72-804226878c7f/manager/0.log" Jan 27 08:20:03 crc kubenswrapper[4787]: I0127 08:20:03.792764 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cnfx4_ba39dacd-d1da-4a2b-a4b9-fa0e917986f0/registry-server/0.log" Jan 27 08:20:03 crc kubenswrapper[4787]: I0127 08:20:03.899842 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-l9mc7_7cae616e-6aa8-405f-b10a-2a5346fae5b4/manager/0.log" Jan 27 08:20:03 crc kubenswrapper[4787]: I0127 08:20:03.983471 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-qhnz4_a552e9e7-f9fa-4b71-9ec6-848a2230279f/manager/0.log" Jan 27 08:20:04 crc kubenswrapper[4787]: I0127 08:20:04.084682 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5lxsh_2cf3085a-51c4-42a7-a788-8e9dd9dbe4c7/operator/0.log" Jan 27 08:20:04 crc kubenswrapper[4787]: I0127 08:20:04.205175 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-pxbjw_89d7a7e7-443a-486f-b8b7-a024082b9fba/manager/0.log" Jan 27 08:20:04 crc kubenswrapper[4787]: I0127 08:20:04.309876 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-cqpjk_7986c3dd-6d06-4892-9fed-1b396669685b/manager/0.log" Jan 27 08:20:04 crc kubenswrapper[4787]: I0127 08:20:04.397288 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-dnxpj_7ec92388-f799-43cc-9235-6d4b717bc98e/manager/0.log" Jan 27 08:20:04 crc kubenswrapper[4787]: I0127 08:20:04.582809 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75db85654f-8cmb9_292c617a-b64f-4b53-b05e-360769308e43/manager/0.log" Jan 27 08:20:08 crc kubenswrapper[4787]: I0127 08:20:08.076294 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:20:08 crc kubenswrapper[4787]: E0127 08:20:08.078005 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:20:17 crc kubenswrapper[4787]: I0127 08:20:17.041432 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-sync-ztn8q"] Jan 27 08:20:17 crc kubenswrapper[4787]: I0127 08:20:17.047380 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-sync-ztn8q"] Jan 27 08:20:17 crc kubenswrapper[4787]: I0127 08:20:17.085204 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c5da7883-eba0-4fa4-92c5-9085b0ce0090" path="/var/lib/kubelet/pods/c5da7883-eba0-4fa4-92c5-9085b0ce0090/volumes" Jan 27 08:20:19 crc kubenswrapper[4787]: I0127 08:20:19.076584 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:20:19 crc kubenswrapper[4787]: E0127 08:20:19.077166 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:20:25 crc kubenswrapper[4787]: I0127 08:20:25.542254 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wdvfw_c71976d9-f7ec-4258-8bf5-08a526607fd9/control-plane-machine-set-operator/0.log" Jan 27 08:20:25 crc kubenswrapper[4787]: I0127 08:20:25.709464 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qtkc8_3ef9e285-88a8-499d-8fb8-e4c882336e68/kube-rbac-proxy/0.log" Jan 27 08:20:25 crc kubenswrapper[4787]: I0127 08:20:25.725370 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qtkc8_3ef9e285-88a8-499d-8fb8-e4c882336e68/machine-api-operator/0.log" Jan 27 08:20:26 crc kubenswrapper[4787]: I0127 08:20:26.025238 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-sync-7x42z"] Jan 27 08:20:26 crc kubenswrapper[4787]: I0127 08:20:26.035221 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-sync-7x42z"] Jan 27 08:20:27 crc kubenswrapper[4787]: I0127 08:20:27.085222 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8" path="/var/lib/kubelet/pods/54e9c344-d7c0-4bcc-97f5-2ad0b8ee86d8/volumes" Jan 27 08:20:32 crc kubenswrapper[4787]: I0127 08:20:32.031455 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-bn5rd"] Jan 27 08:20:32 crc kubenswrapper[4787]: I0127 08:20:32.037242 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-bn5rd"] Jan 27 08:20:33 crc kubenswrapper[4787]: I0127 08:20:33.092244 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d885638-7008-41c1-85fc-43025469e404" path="/var/lib/kubelet/pods/6d885638-7008-41c1-85fc-43025469e404/volumes" Jan 27 08:20:34 crc kubenswrapper[4787]: I0127 08:20:34.076942 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:20:34 crc kubenswrapper[4787]: E0127 08:20:34.077308 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:20:36 crc kubenswrapper[4787]: I0127 08:20:36.024217 4787 scope.go:117] "RemoveContainer" 
containerID="3f06a186b769d4a1f5e238fd19184a35781ff17d23f2779b2c34abe34bf41b44" Jan 27 08:20:36 crc kubenswrapper[4787]: I0127 08:20:36.045685 4787 scope.go:117] "RemoveContainer" containerID="b5b33f88492ce6991e1e36a7024a76b4049def405dfe4dc15a1159b39d6f6633" Jan 27 08:20:36 crc kubenswrapper[4787]: I0127 08:20:36.088179 4787 scope.go:117] "RemoveContainer" containerID="9ab563345b762a72fbe3b47df8d23be9db6758acc4bcc324921200a4b4455e30" Jan 27 08:20:36 crc kubenswrapper[4787]: I0127 08:20:36.108118 4787 scope.go:117] "RemoveContainer" containerID="a478f74f2f9db06c59e7ba51e2be735e9b9fb9d652017f85e3a9daea1ac8536e" Jan 27 08:20:36 crc kubenswrapper[4787]: I0127 08:20:36.137586 4787 scope.go:117] "RemoveContainer" containerID="2e3468c57e9613d84bee598f4f0c5a4b7e44d892a408fe4d11cddeffe9d5b60b" Jan 27 08:20:36 crc kubenswrapper[4787]: I0127 08:20:36.169182 4787 scope.go:117] "RemoveContainer" containerID="dcf70ae1779bf3b7d74781efddb0114399b6ad2886db99535b6b73b54d7d6a8a" Jan 27 08:20:36 crc kubenswrapper[4787]: I0127 08:20:36.211193 4787 scope.go:117] "RemoveContainer" containerID="144343248d2b16217cf2dcb9392f4fccabd0a94e089b16a6bacae928d2fbb414" Jan 27 08:20:36 crc kubenswrapper[4787]: I0127 08:20:36.238771 4787 scope.go:117] "RemoveContainer" containerID="fdbb6282b854859c8ab712b9eae897a43f0552adf90b843b092e2da77b58ae00" Jan 27 08:20:38 crc kubenswrapper[4787]: I0127 08:20:38.678407 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-vbzqb_2219abac-7659-407a-b36f-cd0160e70c25/cert-manager-controller/0.log" Jan 27 08:20:38 crc kubenswrapper[4787]: I0127 08:20:38.902063 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-k4x6h_56feb43c-1e68-4770-94c0-e22943ca4313/cert-manager-cainjector/0.log" Jan 27 08:20:38 crc kubenswrapper[4787]: I0127 08:20:38.922183 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-lfhld_aff129cd-50c9-4f2a-b70d-d2066197a73b/cert-manager-webhook/0.log" Jan 27 08:20:49 crc kubenswrapper[4787]: I0127 08:20:49.078896 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:20:49 crc kubenswrapper[4787]: E0127 08:20:49.082642 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:20:52 crc kubenswrapper[4787]: I0127 08:20:52.192302 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7qxzb_51c3dc19-29a5-45ab-a384-fb7be3aa3f53/nmstate-console-plugin/0.log" Jan 27 08:20:52 crc kubenswrapper[4787]: I0127 08:20:52.334623 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-fjcwr_0131f9ae-88a6-41c3-8c76-6b2676d5e87b/nmstate-handler/0.log" Jan 27 08:20:52 crc kubenswrapper[4787]: I0127 08:20:52.388246 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b8zkq_86b956d1-8553-4624-8324-f0a65d627e10/kube-rbac-proxy/0.log" Jan 27 08:20:52 crc kubenswrapper[4787]: I0127 08:20:52.569157 4787 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b8zkq_86b956d1-8553-4624-8324-f0a65d627e10/nmstate-metrics/0.log" Jan 27 08:20:52 crc kubenswrapper[4787]: I0127 08:20:52.617069 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-zxwfx_d9fe5568-1516-4ee4-8ca1-90f4ea58ba3e/nmstate-operator/0.log" Jan 27 08:20:52 crc kubenswrapper[4787]: I0127 08:20:52.744199 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-n6mzk_29bc137f-2121-451f-9684-2b4209b38c32/nmstate-webhook/0.log" Jan 27 08:21:03 crc kubenswrapper[4787]: I0127 08:21:03.076465 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:21:03 crc kubenswrapper[4787]: E0127 08:21:03.077301 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:21:17 crc kubenswrapper[4787]: I0127 08:21:17.077540 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:21:17 crc kubenswrapper[4787]: E0127 08:21:17.078664 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:21:21 crc kubenswrapper[4787]: I0127 08:21:21.714733 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-k24s7_f9b9a161-da5c-416b-9713-5fb85ee005fb/kube-rbac-proxy/0.log" Jan 27 08:21:21 crc kubenswrapper[4787]: I0127 08:21:21.832506 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-k24s7_f9b9a161-da5c-416b-9713-5fb85ee005fb/controller/0.log" Jan 27 08:21:21 crc kubenswrapper[4787]: I0127 08:21:21.962668 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/cp-frr-files/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.131718 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/cp-frr-files/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.132100 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/cp-reloader/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.132130 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/cp-reloader/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.181040 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/cp-metrics/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.365469 4787 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/cp-reloader/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.372053 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/cp-metrics/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.398432 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/cp-frr-files/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.408153 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/cp-metrics/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.526355 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/cp-frr-files/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.552582 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/cp-reloader/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.571303 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/cp-metrics/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.608380 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/controller/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.754058 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/frr-metrics/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.793782 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/kube-rbac-proxy/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.829206 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/kube-rbac-proxy-frr/0.log" Jan 27 08:21:22 crc kubenswrapper[4787]: I0127 08:21:22.961875 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/reloader/0.log" Jan 27 08:21:23 crc kubenswrapper[4787]: I0127 08:21:23.005140 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pp6fl_49a597ca-3bfc-4377-a49b-19337d609659/frr/0.log" Jan 27 08:21:23 crc kubenswrapper[4787]: I0127 08:21:23.045169 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-vfl8q_360229c9-082c-4e85-8800-c5f8717fd8c4/frr-k8s-webhook-server/0.log" Jan 27 08:21:23 crc kubenswrapper[4787]: I0127 08:21:23.170023 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-555548cdf7-hwkjk_6d5bf055-d90c-451f-aaf7-19140fcea291/manager/0.log" Jan 27 08:21:23 crc kubenswrapper[4787]: I0127 08:21:23.261167 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-78544bb5fb-sxw5f_a5c49848-4aac-4efa-8acf-7edcee5c2093/webhook-server/0.log" Jan 27 08:21:23 crc kubenswrapper[4787]: I0127 08:21:23.375814 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-t2fkx_f5e61cdf-e660-42e7-b43c-42afb781223b/kube-rbac-proxy/0.log" Jan 27 08:21:23 crc kubenswrapper[4787]: I0127 08:21:23.566074 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-t2fkx_f5e61cdf-e660-42e7-b43c-42afb781223b/speaker/0.log" Jan 27 08:21:32 crc kubenswrapper[4787]: I0127 08:21:32.076767 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:21:32 crc kubenswrapper[4787]: E0127 08:21:32.077690 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:21:38 crc kubenswrapper[4787]: I0127 08:21:38.947437 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-678dff9ff-k6jwl_91494b32-1284-49fb-b548-3fbc5f1e1ddf/keystone-api/0.log" Jan 27 08:21:39 crc kubenswrapper[4787]: I0127 08:21:39.238614 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_841b999e-6e42-4f28-8fd8-334f69c3e3e6/mysql-bootstrap/0.log" Jan 27 08:21:39 crc kubenswrapper[4787]: I0127 08:21:39.257637 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_b0fc932b-0d21-426c-9b58-15c14ad9e95b/memcached/0.log" Jan 27 08:21:39 crc kubenswrapper[4787]: I0127 08:21:39.400050 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_841b999e-6e42-4f28-8fd8-334f69c3e3e6/mysql-bootstrap/0.log" Jan 27 08:21:39 crc kubenswrapper[4787]: I0127 08:21:39.447189 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_841b999e-6e42-4f28-8fd8-334f69c3e3e6/galera/0.log" Jan 27 08:21:39 crc kubenswrapper[4787]: I0127 08:21:39.510665 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_144491fe-49b9-4a76-8da2-db798cf6a1e4/mysql-bootstrap/0.log" Jan 27 08:21:39 crc kubenswrapper[4787]: I0127 08:21:39.677619 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_144491fe-49b9-4a76-8da2-db798cf6a1e4/mysql-bootstrap/0.log" Jan 27 08:21:39 crc kubenswrapper[4787]: I0127 08:21:39.689098 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_144491fe-49b9-4a76-8da2-db798cf6a1e4/galera/0.log" Jan 27 08:21:39 crc kubenswrapper[4787]: I0127 08:21:39.721698 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_4826cdba-006d-447f-a206-367bd1dc8893/openstackclient/0.log" Jan 27 08:21:39 crc kubenswrapper[4787]: I0127 08:21:39.901485 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-545ff75684-kqpgf_7d647302-7ff9-465d-b5a5-a3a76f35df28/placement-api/0.log" Jan 27 08:21:39 crc kubenswrapper[4787]: I0127 08:21:39.989099 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-545ff75684-kqpgf_7d647302-7ff9-465d-b5a5-a3a76f35df28/placement-log/0.log" Jan 27 08:21:40 crc kubenswrapper[4787]: I0127 08:21:40.094587 4787 log.go:25] "Finished 
parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_68be2504-1de8-427c-9937-bd0428f7c5c4/setup-container/0.log" Jan 27 08:21:40 crc kubenswrapper[4787]: I0127 08:21:40.276800 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_68be2504-1de8-427c-9937-bd0428f7c5c4/setup-container/0.log" Jan 27 08:21:40 crc kubenswrapper[4787]: I0127 08:21:40.332062 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_68be2504-1de8-427c-9937-bd0428f7c5c4/rabbitmq/0.log" Jan 27 08:21:40 crc kubenswrapper[4787]: I0127 08:21:40.380988 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_faff0a15-880a-4cf7-a0e0-81d573ace274/setup-container/0.log" Jan 27 08:21:40 crc kubenswrapper[4787]: I0127 08:21:40.535455 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_faff0a15-880a-4cf7-a0e0-81d573ace274/rabbitmq/0.log" Jan 27 08:21:40 crc kubenswrapper[4787]: I0127 08:21:40.564363 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_faff0a15-880a-4cf7-a0e0-81d573ace274/setup-container/0.log" Jan 27 08:21:40 crc kubenswrapper[4787]: I0127 08:21:40.617740 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-notifications-server-0_1d9a6f95-2510-4e74-b4f7-fb592d761c91/setup-container/0.log" Jan 27 08:21:40 crc kubenswrapper[4787]: I0127 08:21:40.787169 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-notifications-server-0_1d9a6f95-2510-4e74-b4f7-fb592d761c91/setup-container/0.log" Jan 27 08:21:40 crc kubenswrapper[4787]: I0127 08:21:40.834377 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-notifications-server-0_1d9a6f95-2510-4e74-b4f7-fb592d761c91/rabbitmq/0.log" Jan 27 08:21:40 crc kubenswrapper[4787]: I0127 08:21:40.834732 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_f84bfff4-b4f2-40d8-8b81-a9e5eb776442/setup-container/0.log" Jan 27 08:21:41 crc kubenswrapper[4787]: I0127 08:21:41.073664 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_f84bfff4-b4f2-40d8-8b81-a9e5eb776442/setup-container/0.log" Jan 27 08:21:41 crc kubenswrapper[4787]: I0127 08:21:41.076580 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_f84bfff4-b4f2-40d8-8b81-a9e5eb776442/rabbitmq/0.log" Jan 27 08:21:47 crc kubenswrapper[4787]: I0127 08:21:47.076783 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:21:47 crc kubenswrapper[4787]: E0127 08:21:47.077348 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:21:56 crc kubenswrapper[4787]: I0127 08:21:56.187348 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq_9ba6b24a-de23-4528-87fc-c2932e3beb6e/util/0.log" Jan 27 08:21:56 crc kubenswrapper[4787]: I0127 08:21:56.448261 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq_9ba6b24a-de23-4528-87fc-c2932e3beb6e/util/0.log" Jan 27 08:21:56 crc kubenswrapper[4787]: I0127 08:21:56.489358 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq_9ba6b24a-de23-4528-87fc-c2932e3beb6e/pull/0.log" Jan 27 08:21:56 crc kubenswrapper[4787]: I0127 08:21:56.574273 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq_9ba6b24a-de23-4528-87fc-c2932e3beb6e/pull/0.log" Jan 27 08:21:56 crc kubenswrapper[4787]: I0127 08:21:56.664168 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq_9ba6b24a-de23-4528-87fc-c2932e3beb6e/util/0.log" Jan 27 08:21:56 crc kubenswrapper[4787]: I0127 08:21:56.738902 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq_9ba6b24a-de23-4528-87fc-c2932e3beb6e/extract/0.log" Jan 27 08:21:56 crc kubenswrapper[4787]: I0127 08:21:56.779744 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931av6xrq_9ba6b24a-de23-4528-87fc-c2932e3beb6e/pull/0.log" Jan 27 08:21:56 crc kubenswrapper[4787]: I0127 08:21:56.858832 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77_db32e829-ff6c-4e31-bbc4-5291eb8127d3/util/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.079370 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77_db32e829-ff6c-4e31-bbc4-5291eb8127d3/pull/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.095200 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77_db32e829-ff6c-4e31-bbc4-5291eb8127d3/pull/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.103122 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77_db32e829-ff6c-4e31-bbc4-5291eb8127d3/util/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.290941 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77_db32e829-ff6c-4e31-bbc4-5291eb8127d3/pull/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.315988 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77_db32e829-ff6c-4e31-bbc4-5291eb8127d3/util/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.384935 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc44q77_db32e829-ff6c-4e31-bbc4-5291eb8127d3/extract/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.482253 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v_b105de51-099b-4b81-b99d-6aa63d8821ae/util/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.643651 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v_b105de51-099b-4b81-b99d-6aa63d8821ae/util/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.674791 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v_b105de51-099b-4b81-b99d-6aa63d8821ae/pull/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.675629 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v_b105de51-099b-4b81-b99d-6aa63d8821ae/pull/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.881904 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v_b105de51-099b-4b81-b99d-6aa63d8821ae/util/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.882681 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v_b105de51-099b-4b81-b99d-6aa63d8821ae/extract/0.log" Jan 27 08:21:57 crc kubenswrapper[4787]: I0127 08:21:57.919381 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71368s4v_b105de51-099b-4b81-b99d-6aa63d8821ae/pull/0.log" Jan 27 08:21:58 crc kubenswrapper[4787]: I0127 08:21:58.092103 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjssk_56eb4b72-a8dc-4300-882e-e29d83442af5/extract-utilities/0.log" Jan 27 08:21:58 crc kubenswrapper[4787]: I0127 08:21:58.292440 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjssk_56eb4b72-a8dc-4300-882e-e29d83442af5/extract-utilities/0.log" Jan 27 08:21:58 crc kubenswrapper[4787]: I0127 08:21:58.354355 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjssk_56eb4b72-a8dc-4300-882e-e29d83442af5/extract-content/0.log" Jan 27 08:21:58 crc kubenswrapper[4787]: I0127 08:21:58.381894 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjssk_56eb4b72-a8dc-4300-882e-e29d83442af5/extract-content/0.log" Jan 27 08:21:58 crc kubenswrapper[4787]: I0127 08:21:58.472903 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjssk_56eb4b72-a8dc-4300-882e-e29d83442af5/extract-utilities/0.log" Jan 27 08:21:58 crc kubenswrapper[4787]: I0127 08:21:58.604031 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjssk_56eb4b72-a8dc-4300-882e-e29d83442af5/extract-content/0.log" Jan 27 08:21:58 crc kubenswrapper[4787]: I0127 08:21:58.803727 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-gjx52_6953906d-8a35-4d3e-83a3-3a7451e834cc/extract-utilities/0.log" Jan 27 08:21:58 crc kubenswrapper[4787]: I0127 08:21:58.875449 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjssk_56eb4b72-a8dc-4300-882e-e29d83442af5/registry-server/0.log" Jan 27 08:21:59 crc kubenswrapper[4787]: I0127 08:21:59.034219 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjx52_6953906d-8a35-4d3e-83a3-3a7451e834cc/extract-content/0.log" Jan 27 08:21:59 crc kubenswrapper[4787]: I0127 08:21:59.052364 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjx52_6953906d-8a35-4d3e-83a3-3a7451e834cc/extract-content/0.log" Jan 27 08:21:59 crc kubenswrapper[4787]: I0127 08:21:59.070272 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjx52_6953906d-8a35-4d3e-83a3-3a7451e834cc/extract-utilities/0.log" Jan 27 08:21:59 crc kubenswrapper[4787]: I0127 08:21:59.080205 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:21:59 crc kubenswrapper[4787]: E0127 08:21:59.080426 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:21:59 crc kubenswrapper[4787]: I0127 08:21:59.414143 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjx52_6953906d-8a35-4d3e-83a3-3a7451e834cc/extract-content/0.log" Jan 27 08:21:59 crc kubenswrapper[4787]: I0127 08:21:59.496729 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjx52_6953906d-8a35-4d3e-83a3-3a7451e834cc/extract-utilities/0.log" Jan 27 08:21:59 crc kubenswrapper[4787]: I0127 08:21:59.679230 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fkd29_39675fef-cac4-48c9-bd77-e0ee695a5ab8/marketplace-operator/0.log" Jan 27 08:21:59 crc kubenswrapper[4787]: I0127 08:21:59.716123 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjx52_6953906d-8a35-4d3e-83a3-3a7451e834cc/registry-server/0.log" Jan 27 08:21:59 crc kubenswrapper[4787]: I0127 08:21:59.842420 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdpsb_b6daca56-9e67-41d1-80da-c213717daead/extract-utilities/0.log" Jan 27 08:21:59 crc kubenswrapper[4787]: I0127 08:21:59.933483 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdpsb_b6daca56-9e67-41d1-80da-c213717daead/extract-utilities/0.log" Jan 27 08:21:59 crc kubenswrapper[4787]: I0127 08:21:59.998569 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdpsb_b6daca56-9e67-41d1-80da-c213717daead/extract-content/0.log" Jan 27 08:22:00 crc kubenswrapper[4787]: I0127 08:22:00.139009 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdpsb_b6daca56-9e67-41d1-80da-c213717daead/extract-content/0.log" Jan 27 08:22:00 crc kubenswrapper[4787]: I0127 08:22:00.275948 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdpsb_b6daca56-9e67-41d1-80da-c213717daead/extract-content/0.log" Jan 27 08:22:00 crc kubenswrapper[4787]: I0127 08:22:00.324474 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdpsb_b6daca56-9e67-41d1-80da-c213717daead/extract-utilities/0.log" Jan 27 08:22:00 crc kubenswrapper[4787]: I0127 08:22:00.400733 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdpsb_b6daca56-9e67-41d1-80da-c213717daead/registry-server/0.log" Jan 27 08:22:00 crc kubenswrapper[4787]: I0127 08:22:00.413347 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8bz6z_3c2c8464-ea72-46ca-a23d-ae23e6617ce8/extract-utilities/0.log" Jan 27 08:22:00 crc kubenswrapper[4787]: I0127 08:22:00.582703 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8bz6z_3c2c8464-ea72-46ca-a23d-ae23e6617ce8/extract-content/0.log" Jan 27 08:22:00 crc kubenswrapper[4787]: I0127 08:22:00.598802 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8bz6z_3c2c8464-ea72-46ca-a23d-ae23e6617ce8/extract-utilities/0.log" Jan 27 08:22:00 crc kubenswrapper[4787]: I0127 08:22:00.604590 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8bz6z_3c2c8464-ea72-46ca-a23d-ae23e6617ce8/extract-content/0.log" Jan 27 08:22:00 crc kubenswrapper[4787]: I0127 08:22:00.811622 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8bz6z_3c2c8464-ea72-46ca-a23d-ae23e6617ce8/extract-utilities/0.log" Jan 27 08:22:00 crc kubenswrapper[4787]: I0127 08:22:00.827615 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8bz6z_3c2c8464-ea72-46ca-a23d-ae23e6617ce8/extract-content/0.log" Jan 27 08:22:01 crc kubenswrapper[4787]: I0127 08:22:01.077032 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8bz6z_3c2c8464-ea72-46ca-a23d-ae23e6617ce8/registry-server/0.log" Jan 27 08:22:10 crc kubenswrapper[4787]: I0127 08:22:10.076280 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:22:10 crc kubenswrapper[4787]: E0127 08:22:10.076997 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:22:22 crc kubenswrapper[4787]: I0127 08:22:22.076696 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:22:22 crc kubenswrapper[4787]: E0127 08:22:22.077464 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:22:37 crc kubenswrapper[4787]: I0127 08:22:37.076759 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:22:37 crc kubenswrapper[4787]: E0127 08:22:37.077682 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:22:51 crc kubenswrapper[4787]: I0127 08:22:51.077043 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:22:51 crc kubenswrapper[4787]: E0127 08:22:51.077894 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:23:06 crc kubenswrapper[4787]: I0127 08:23:06.077132 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:23:06 crc kubenswrapper[4787]: E0127 08:23:06.078040 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:23:18 crc kubenswrapper[4787]: I0127 08:23:18.682414 4787 generic.go:334] "Generic (PLEG): container finished" podID="8f8dfa07-27b1-450f-9019-56d0ced6d238" containerID="f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156" exitCode=0 Jan 27 08:23:18 crc kubenswrapper[4787]: I0127 08:23:18.682514 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zjjpz/must-gather-hldr6" event={"ID":"8f8dfa07-27b1-450f-9019-56d0ced6d238","Type":"ContainerDied","Data":"f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156"} Jan 27 08:23:18 crc kubenswrapper[4787]: I0127 08:23:18.683543 4787 scope.go:117] "RemoveContainer" containerID="f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156" Jan 27 08:23:19 crc kubenswrapper[4787]: I0127 08:23:19.372793 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zjjpz_must-gather-hldr6_8f8dfa07-27b1-450f-9019-56d0ced6d238/gather/0.log" Jan 27 08:23:21 crc kubenswrapper[4787]: I0127 08:23:21.076929 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:23:21 crc kubenswrapper[4787]: E0127 08:23:21.077573 4787 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:23:26 crc kubenswrapper[4787]: I0127 08:23:26.624267 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zjjpz/must-gather-hldr6"] Jan 27 08:23:26 crc kubenswrapper[4787]: I0127 08:23:26.625206 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zjjpz/must-gather-hldr6" podUID="8f8dfa07-27b1-450f-9019-56d0ced6d238" containerName="copy" containerID="cri-o://387fee2205dae8bd6763255730dec63813d6d41dc01c06d500efe409bc6e4380" gracePeriod=2 Jan 27 08:23:26 crc kubenswrapper[4787]: I0127 08:23:26.631101 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zjjpz/must-gather-hldr6"] Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.048297 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zjjpz_must-gather-hldr6_8f8dfa07-27b1-450f-9019-56d0ced6d238/copy/0.log" Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.049184 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zjjpz/must-gather-hldr6" Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.164509 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrwnk\" (UniqueName: \"kubernetes.io/projected/8f8dfa07-27b1-450f-9019-56d0ced6d238-kube-api-access-lrwnk\") pod \"8f8dfa07-27b1-450f-9019-56d0ced6d238\" (UID: \"8f8dfa07-27b1-450f-9019-56d0ced6d238\") " Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.164576 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8f8dfa07-27b1-450f-9019-56d0ced6d238-must-gather-output\") pod \"8f8dfa07-27b1-450f-9019-56d0ced6d238\" (UID: \"8f8dfa07-27b1-450f-9019-56d0ced6d238\") " Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.169962 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f8dfa07-27b1-450f-9019-56d0ced6d238-kube-api-access-lrwnk" (OuterVolumeSpecName: "kube-api-access-lrwnk") pod "8f8dfa07-27b1-450f-9019-56d0ced6d238" (UID: "8f8dfa07-27b1-450f-9019-56d0ced6d238"). InnerVolumeSpecName "kube-api-access-lrwnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.257934 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f8dfa07-27b1-450f-9019-56d0ced6d238-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8f8dfa07-27b1-450f-9019-56d0ced6d238" (UID: "8f8dfa07-27b1-450f-9019-56d0ced6d238"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.266953 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrwnk\" (UniqueName: \"kubernetes.io/projected/8f8dfa07-27b1-450f-9019-56d0ced6d238-kube-api-access-lrwnk\") on node \"crc\" DevicePath \"\"" Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.266986 4787 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8f8dfa07-27b1-450f-9019-56d0ced6d238-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.747605 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zjjpz_must-gather-hldr6_8f8dfa07-27b1-450f-9019-56d0ced6d238/copy/0.log" Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.748474 4787 generic.go:334] "Generic (PLEG): container finished" podID="8f8dfa07-27b1-450f-9019-56d0ced6d238" containerID="387fee2205dae8bd6763255730dec63813d6d41dc01c06d500efe409bc6e4380" exitCode=143 Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.748538 4787 scope.go:117] "RemoveContainer" containerID="387fee2205dae8bd6763255730dec63813d6d41dc01c06d500efe409bc6e4380" Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.748582 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zjjpz/must-gather-hldr6" Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.787766 4787 scope.go:117] "RemoveContainer" containerID="f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156" Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.839125 4787 scope.go:117] "RemoveContainer" containerID="387fee2205dae8bd6763255730dec63813d6d41dc01c06d500efe409bc6e4380" Jan 27 08:23:27 crc kubenswrapper[4787]: E0127 08:23:27.839630 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"387fee2205dae8bd6763255730dec63813d6d41dc01c06d500efe409bc6e4380\": container with ID starting with 387fee2205dae8bd6763255730dec63813d6d41dc01c06d500efe409bc6e4380 not found: ID does not exist" containerID="387fee2205dae8bd6763255730dec63813d6d41dc01c06d500efe409bc6e4380" Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.839656 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"387fee2205dae8bd6763255730dec63813d6d41dc01c06d500efe409bc6e4380"} err="failed to get container status \"387fee2205dae8bd6763255730dec63813d6d41dc01c06d500efe409bc6e4380\": rpc error: code = NotFound desc = could not find container \"387fee2205dae8bd6763255730dec63813d6d41dc01c06d500efe409bc6e4380\": container with ID starting with 387fee2205dae8bd6763255730dec63813d6d41dc01c06d500efe409bc6e4380 not found: ID does not exist" Jan 27 08:23:27 crc kubenswrapper[4787]: I0127 08:23:27.839678 4787 scope.go:117] "RemoveContainer" containerID="f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156" Jan 27 08:23:27 crc kubenswrapper[4787]: E0127 08:23:27.839921 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156\": container with ID starting with f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156 not found: ID does not exist" containerID="f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156" Jan 27 08:23:27 crc 
kubenswrapper[4787]: I0127 08:23:27.839940 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156"} err="failed to get container status \"f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156\": rpc error: code = NotFound desc = could not find container \"f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156\": container with ID starting with f30e73fea6424d4233a5355b7aa6b2d90d5724e1970661129e89efcc148da156 not found: ID does not exist" Jan 27 08:23:29 crc kubenswrapper[4787]: I0127 08:23:29.085360 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f8dfa07-27b1-450f-9019-56d0ced6d238" path="/var/lib/kubelet/pods/8f8dfa07-27b1-450f-9019-56d0ced6d238/volumes" Jan 27 08:23:33 crc kubenswrapper[4787]: I0127 08:23:33.077110 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:23:33 crc kubenswrapper[4787]: E0127 08:23:33.077860 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:23:48 crc kubenswrapper[4787]: I0127 08:23:48.081048 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:23:48 crc kubenswrapper[4787]: E0127 08:23:48.082150 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:23:59 crc kubenswrapper[4787]: I0127 08:23:59.077370 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:23:59 crc kubenswrapper[4787]: E0127 08:23:59.078282 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:24:13 crc kubenswrapper[4787]: I0127 08:24:13.076685 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:24:13 crc kubenswrapper[4787]: E0127 08:24:13.077646 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:24:25 crc kubenswrapper[4787]: 
I0127 08:24:25.573356 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5h5v6"] Jan 27 08:24:25 crc kubenswrapper[4787]: E0127 08:24:25.574307 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8dfa07-27b1-450f-9019-56d0ced6d238" containerName="copy" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.574326 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8dfa07-27b1-450f-9019-56d0ced6d238" containerName="copy" Jan 27 08:24:25 crc kubenswrapper[4787]: E0127 08:24:25.574348 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8dfa07-27b1-450f-9019-56d0ced6d238" containerName="gather" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.574357 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8dfa07-27b1-450f-9019-56d0ced6d238" containerName="gather" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.574566 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f8dfa07-27b1-450f-9019-56d0ced6d238" containerName="gather" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.574597 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f8dfa07-27b1-450f-9019-56d0ced6d238" containerName="copy" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.575663 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.597957 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5h5v6"] Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.643458 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d52ceef-4485-4ef8-a928-e674645895d7-catalog-content\") pod \"redhat-operators-5h5v6\" (UID: \"5d52ceef-4485-4ef8-a928-e674645895d7\") " pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.643512 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwhvk\" (UniqueName: \"kubernetes.io/projected/5d52ceef-4485-4ef8-a928-e674645895d7-kube-api-access-cwhvk\") pod \"redhat-operators-5h5v6\" (UID: \"5d52ceef-4485-4ef8-a928-e674645895d7\") " pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.643578 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d52ceef-4485-4ef8-a928-e674645895d7-utilities\") pod \"redhat-operators-5h5v6\" (UID: \"5d52ceef-4485-4ef8-a928-e674645895d7\") " pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.745424 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d52ceef-4485-4ef8-a928-e674645895d7-catalog-content\") pod \"redhat-operators-5h5v6\" (UID: \"5d52ceef-4485-4ef8-a928-e674645895d7\") " pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.745826 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwhvk\" (UniqueName: \"kubernetes.io/projected/5d52ceef-4485-4ef8-a928-e674645895d7-kube-api-access-cwhvk\") pod \"redhat-operators-5h5v6\" (UID: 
\"5d52ceef-4485-4ef8-a928-e674645895d7\") " pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.745866 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d52ceef-4485-4ef8-a928-e674645895d7-utilities\") pod \"redhat-operators-5h5v6\" (UID: \"5d52ceef-4485-4ef8-a928-e674645895d7\") " pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.746074 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d52ceef-4485-4ef8-a928-e674645895d7-catalog-content\") pod \"redhat-operators-5h5v6\" (UID: \"5d52ceef-4485-4ef8-a928-e674645895d7\") " pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.746122 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d52ceef-4485-4ef8-a928-e674645895d7-utilities\") pod \"redhat-operators-5h5v6\" (UID: \"5d52ceef-4485-4ef8-a928-e674645895d7\") " pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.770635 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwhvk\" (UniqueName: \"kubernetes.io/projected/5d52ceef-4485-4ef8-a928-e674645895d7-kube-api-access-cwhvk\") pod \"redhat-operators-5h5v6\" (UID: \"5d52ceef-4485-4ef8-a928-e674645895d7\") " pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:25 crc kubenswrapper[4787]: I0127 08:24:25.903913 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:26 crc kubenswrapper[4787]: I0127 08:24:26.076108 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:24:26 crc kubenswrapper[4787]: E0127 08:24:26.076717 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:24:26 crc kubenswrapper[4787]: I0127 08:24:26.393136 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5h5v6"] Jan 27 08:24:27 crc kubenswrapper[4787]: I0127 08:24:27.187578 4787 generic.go:334] "Generic (PLEG): container finished" podID="5d52ceef-4485-4ef8-a928-e674645895d7" containerID="1dad88948101345937304f02d97549de9181a706153003bd851bda10eadc04f8" exitCode=0 Jan 27 08:24:27 crc kubenswrapper[4787]: I0127 08:24:27.187608 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h5v6" event={"ID":"5d52ceef-4485-4ef8-a928-e674645895d7","Type":"ContainerDied","Data":"1dad88948101345937304f02d97549de9181a706153003bd851bda10eadc04f8"} Jan 27 08:24:27 crc kubenswrapper[4787]: I0127 08:24:27.187917 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h5v6" event={"ID":"5d52ceef-4485-4ef8-a928-e674645895d7","Type":"ContainerStarted","Data":"4eac58c9bc7e715496982aafc25fdbce3075831ece2d9aae7f17ee70e9365f45"} Jan 
27 08:24:27 crc kubenswrapper[4787]: I0127 08:24:27.188968 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:24:28 crc kubenswrapper[4787]: I0127 08:24:28.194874 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h5v6" event={"ID":"5d52ceef-4485-4ef8-a928-e674645895d7","Type":"ContainerStarted","Data":"8891d4a222ee724f541e25eed3f2d449b6f1f186a201bb270d2404d4a73ed22f"} Jan 27 08:24:29 crc kubenswrapper[4787]: I0127 08:24:29.204598 4787 generic.go:334] "Generic (PLEG): container finished" podID="5d52ceef-4485-4ef8-a928-e674645895d7" containerID="8891d4a222ee724f541e25eed3f2d449b6f1f186a201bb270d2404d4a73ed22f" exitCode=0 Jan 27 08:24:29 crc kubenswrapper[4787]: I0127 08:24:29.204672 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h5v6" event={"ID":"5d52ceef-4485-4ef8-a928-e674645895d7","Type":"ContainerDied","Data":"8891d4a222ee724f541e25eed3f2d449b6f1f186a201bb270d2404d4a73ed22f"} Jan 27 08:24:30 crc kubenswrapper[4787]: I0127 08:24:30.214394 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h5v6" event={"ID":"5d52ceef-4485-4ef8-a928-e674645895d7","Type":"ContainerStarted","Data":"cfec861a461574c11bd800dda12c846a44c5492289758881603832f829ad865b"} Jan 27 08:24:30 crc kubenswrapper[4787]: I0127 08:24:30.232508 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5h5v6" podStartSLOduration=2.791434947 podStartE2EDuration="5.232485844s" podCreationTimestamp="2026-01-27 08:24:25 +0000 UTC" firstStartedPulling="2026-01-27 08:24:27.188790958 +0000 UTC m=+1972.841146450" lastFinishedPulling="2026-01-27 08:24:29.629841855 +0000 UTC m=+1975.282197347" observedRunningTime="2026-01-27 08:24:30.231405239 +0000 UTC m=+1975.883760741" watchObservedRunningTime="2026-01-27 08:24:30.232485844 +0000 UTC m=+1975.884841366" Jan 27 08:24:35 crc kubenswrapper[4787]: I0127 08:24:35.904827 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:35 crc kubenswrapper[4787]: I0127 08:24:35.905567 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:35 crc kubenswrapper[4787]: I0127 08:24:35.944089 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:36 crc kubenswrapper[4787]: I0127 08:24:36.295844 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:36 crc kubenswrapper[4787]: I0127 08:24:36.349267 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5h5v6"] Jan 27 08:24:38 crc kubenswrapper[4787]: I0127 08:24:38.268719 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5h5v6" podUID="5d52ceef-4485-4ef8-a928-e674645895d7" containerName="registry-server" containerID="cri-o://cfec861a461574c11bd800dda12c846a44c5492289758881603832f829ad865b" gracePeriod=2 Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.076967 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:24:40 crc kubenswrapper[4787]: E0127 08:24:40.077519 4787 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4fh5_openshift-machine-config-operator(f051e184-acac-47cf-9e04-9df648288715)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.296322 4787 generic.go:334] "Generic (PLEG): container finished" podID="5d52ceef-4485-4ef8-a928-e674645895d7" containerID="cfec861a461574c11bd800dda12c846a44c5492289758881603832f829ad865b" exitCode=0 Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.296380 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h5v6" event={"ID":"5d52ceef-4485-4ef8-a928-e674645895d7","Type":"ContainerDied","Data":"cfec861a461574c11bd800dda12c846a44c5492289758881603832f829ad865b"} Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.649951 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.658466 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d52ceef-4485-4ef8-a928-e674645895d7-utilities\") pod \"5d52ceef-4485-4ef8-a928-e674645895d7\" (UID: \"5d52ceef-4485-4ef8-a928-e674645895d7\") " Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.658514 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwhvk\" (UniqueName: \"kubernetes.io/projected/5d52ceef-4485-4ef8-a928-e674645895d7-kube-api-access-cwhvk\") pod \"5d52ceef-4485-4ef8-a928-e674645895d7\" (UID: \"5d52ceef-4485-4ef8-a928-e674645895d7\") " Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.658659 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d52ceef-4485-4ef8-a928-e674645895d7-catalog-content\") pod \"5d52ceef-4485-4ef8-a928-e674645895d7\" (UID: \"5d52ceef-4485-4ef8-a928-e674645895d7\") " Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.659275 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d52ceef-4485-4ef8-a928-e674645895d7-utilities" (OuterVolumeSpecName: "utilities") pod "5d52ceef-4485-4ef8-a928-e674645895d7" (UID: "5d52ceef-4485-4ef8-a928-e674645895d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.669816 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d52ceef-4485-4ef8-a928-e674645895d7-kube-api-access-cwhvk" (OuterVolumeSpecName: "kube-api-access-cwhvk") pod "5d52ceef-4485-4ef8-a928-e674645895d7" (UID: "5d52ceef-4485-4ef8-a928-e674645895d7"). InnerVolumeSpecName "kube-api-access-cwhvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.760140 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d52ceef-4485-4ef8-a928-e674645895d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.760365 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwhvk\" (UniqueName: \"kubernetes.io/projected/5d52ceef-4485-4ef8-a928-e674645895d7-kube-api-access-cwhvk\") on node \"crc\" DevicePath \"\"" Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.775138 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d52ceef-4485-4ef8-a928-e674645895d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d52ceef-4485-4ef8-a928-e674645895d7" (UID: "5d52ceef-4485-4ef8-a928-e674645895d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:24:40 crc kubenswrapper[4787]: I0127 08:24:40.861725 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d52ceef-4485-4ef8-a928-e674645895d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:24:41 crc kubenswrapper[4787]: I0127 08:24:41.304282 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h5v6" event={"ID":"5d52ceef-4485-4ef8-a928-e674645895d7","Type":"ContainerDied","Data":"4eac58c9bc7e715496982aafc25fdbce3075831ece2d9aae7f17ee70e9365f45"} Jan 27 08:24:41 crc kubenswrapper[4787]: I0127 08:24:41.304572 4787 scope.go:117] "RemoveContainer" containerID="cfec861a461574c11bd800dda12c846a44c5492289758881603832f829ad865b" Jan 27 08:24:41 crc kubenswrapper[4787]: I0127 08:24:41.304616 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5h5v6" Jan 27 08:24:41 crc kubenswrapper[4787]: I0127 08:24:41.335773 4787 scope.go:117] "RemoveContainer" containerID="8891d4a222ee724f541e25eed3f2d449b6f1f186a201bb270d2404d4a73ed22f" Jan 27 08:24:41 crc kubenswrapper[4787]: I0127 08:24:41.344639 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5h5v6"] Jan 27 08:24:41 crc kubenswrapper[4787]: I0127 08:24:41.346311 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5h5v6"] Jan 27 08:24:41 crc kubenswrapper[4787]: I0127 08:24:41.409859 4787 scope.go:117] "RemoveContainer" containerID="1dad88948101345937304f02d97549de9181a706153003bd851bda10eadc04f8" Jan 27 08:24:43 crc kubenswrapper[4787]: I0127 08:24:43.085298 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d52ceef-4485-4ef8-a928-e674645895d7" path="/var/lib/kubelet/pods/5d52ceef-4485-4ef8-a928-e674645895d7/volumes" Jan 27 08:24:55 crc kubenswrapper[4787]: I0127 08:24:55.080908 4787 scope.go:117] "RemoveContainer" containerID="125d3b0a12c5355f78fe53f0c5060ee333e1de0ab42bf300e460906f195efd1e" Jan 27 08:24:55 crc kubenswrapper[4787]: I0127 08:24:55.417922 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" event={"ID":"f051e184-acac-47cf-9e04-9df648288715","Type":"ContainerStarted","Data":"fddd18f1ff4cbc977bd28c15a1789e1044c68d26be8f818f7c210dbd846d2077"} Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.210332 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vcgw7"] Jan 27 08:26:08 crc kubenswrapper[4787]: E0127 08:26:08.211575 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d52ceef-4485-4ef8-a928-e674645895d7" containerName="registry-server" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.211591 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d52ceef-4485-4ef8-a928-e674645895d7" containerName="registry-server" Jan 27 08:26:08 crc kubenswrapper[4787]: E0127 08:26:08.211633 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d52ceef-4485-4ef8-a928-e674645895d7" containerName="extract-utilities" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.211640 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d52ceef-4485-4ef8-a928-e674645895d7" containerName="extract-utilities" Jan 27 08:26:08 crc kubenswrapper[4787]: E0127 08:26:08.211655 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d52ceef-4485-4ef8-a928-e674645895d7" containerName="extract-content" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.211661 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d52ceef-4485-4ef8-a928-e674645895d7" containerName="extract-content" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.211999 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d52ceef-4485-4ef8-a928-e674645895d7" containerName="registry-server" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.225998 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.257282 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcgw7"] Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.344781 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdd49\" (UniqueName: \"kubernetes.io/projected/a2265c67-504d-465a-b80a-c715b4202cf3-kube-api-access-bdd49\") pod \"certified-operators-vcgw7\" (UID: \"a2265c67-504d-465a-b80a-c715b4202cf3\") " pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.344925 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2265c67-504d-465a-b80a-c715b4202cf3-catalog-content\") pod \"certified-operators-vcgw7\" (UID: \"a2265c67-504d-465a-b80a-c715b4202cf3\") " pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.344958 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2265c67-504d-465a-b80a-c715b4202cf3-utilities\") pod \"certified-operators-vcgw7\" (UID: \"a2265c67-504d-465a-b80a-c715b4202cf3\") " pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.446198 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdd49\" (UniqueName: \"kubernetes.io/projected/a2265c67-504d-465a-b80a-c715b4202cf3-kube-api-access-bdd49\") pod \"certified-operators-vcgw7\" (UID: \"a2265c67-504d-465a-b80a-c715b4202cf3\") " pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.446293 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2265c67-504d-465a-b80a-c715b4202cf3-catalog-content\") pod \"certified-operators-vcgw7\" (UID: \"a2265c67-504d-465a-b80a-c715b4202cf3\") " pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.446320 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2265c67-504d-465a-b80a-c715b4202cf3-utilities\") pod \"certified-operators-vcgw7\" (UID: \"a2265c67-504d-465a-b80a-c715b4202cf3\") " pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.446864 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2265c67-504d-465a-b80a-c715b4202cf3-utilities\") pod \"certified-operators-vcgw7\" (UID: \"a2265c67-504d-465a-b80a-c715b4202cf3\") " pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.447118 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2265c67-504d-465a-b80a-c715b4202cf3-catalog-content\") pod \"certified-operators-vcgw7\" (UID: \"a2265c67-504d-465a-b80a-c715b4202cf3\") " pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.482414 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bdd49\" (UniqueName: \"kubernetes.io/projected/a2265c67-504d-465a-b80a-c715b4202cf3-kube-api-access-bdd49\") pod \"certified-operators-vcgw7\" (UID: \"a2265c67-504d-465a-b80a-c715b4202cf3\") " pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.568130 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.847634 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcgw7"] Jan 27 08:26:08 crc kubenswrapper[4787]: I0127 08:26:08.931373 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcgw7" event={"ID":"a2265c67-504d-465a-b80a-c715b4202cf3","Type":"ContainerStarted","Data":"060b3b3e5c20939a0931c5b1b0fcae788e01b084a190e6d6e512693b8e45f483"} Jan 27 08:26:09 crc kubenswrapper[4787]: I0127 08:26:09.940300 4787 generic.go:334] "Generic (PLEG): container finished" podID="a2265c67-504d-465a-b80a-c715b4202cf3" containerID="1069293c5c0e9f74935618894be14b24cf62f8b9c2e8c906fa00daeb2cea9400" exitCode=0 Jan 27 08:26:09 crc kubenswrapper[4787]: I0127 08:26:09.940421 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcgw7" event={"ID":"a2265c67-504d-465a-b80a-c715b4202cf3","Type":"ContainerDied","Data":"1069293c5c0e9f74935618894be14b24cf62f8b9c2e8c906fa00daeb2cea9400"} Jan 27 08:26:10 crc kubenswrapper[4787]: I0127 08:26:10.949644 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcgw7" event={"ID":"a2265c67-504d-465a-b80a-c715b4202cf3","Type":"ContainerStarted","Data":"779b69bc3b17304f4e79421bac24efff90f485ec9ae8f9b81420efd9123664f1"} Jan 27 08:26:11 crc kubenswrapper[4787]: I0127 08:26:11.958386 4787 generic.go:334] "Generic (PLEG): container finished" podID="a2265c67-504d-465a-b80a-c715b4202cf3" containerID="779b69bc3b17304f4e79421bac24efff90f485ec9ae8f9b81420efd9123664f1" exitCode=0 Jan 27 08:26:11 crc kubenswrapper[4787]: I0127 08:26:11.958445 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcgw7" event={"ID":"a2265c67-504d-465a-b80a-c715b4202cf3","Type":"ContainerDied","Data":"779b69bc3b17304f4e79421bac24efff90f485ec9ae8f9b81420efd9123664f1"} Jan 27 08:26:11 crc kubenswrapper[4787]: I0127 08:26:11.958705 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcgw7" event={"ID":"a2265c67-504d-465a-b80a-c715b4202cf3","Type":"ContainerStarted","Data":"9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3"} Jan 27 08:26:11 crc kubenswrapper[4787]: I0127 08:26:11.978973 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vcgw7" podStartSLOduration=2.567045157 podStartE2EDuration="3.978954605s" podCreationTimestamp="2026-01-27 08:26:08 +0000 UTC" firstStartedPulling="2026-01-27 08:26:09.942818485 +0000 UTC m=+2075.595173977" lastFinishedPulling="2026-01-27 08:26:11.354727933 +0000 UTC m=+2077.007083425" observedRunningTime="2026-01-27 08:26:11.972403714 +0000 UTC m=+2077.624759216" watchObservedRunningTime="2026-01-27 08:26:11.978954605 +0000 UTC m=+2077.631310087" Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.574483 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xmpnl"] Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.576540 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.585641 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmpnl"] Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.720485 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04443910-6fb3-4e50-983a-c00b04645e7c-catalog-content\") pod \"redhat-marketplace-xmpnl\" (UID: \"04443910-6fb3-4e50-983a-c00b04645e7c\") " pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.720586 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04443910-6fb3-4e50-983a-c00b04645e7c-utilities\") pod \"redhat-marketplace-xmpnl\" (UID: \"04443910-6fb3-4e50-983a-c00b04645e7c\") " pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.720618 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hpm9\" (UniqueName: \"kubernetes.io/projected/04443910-6fb3-4e50-983a-c00b04645e7c-kube-api-access-7hpm9\") pod \"redhat-marketplace-xmpnl\" (UID: \"04443910-6fb3-4e50-983a-c00b04645e7c\") " pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.822651 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04443910-6fb3-4e50-983a-c00b04645e7c-catalog-content\") pod \"redhat-marketplace-xmpnl\" (UID: \"04443910-6fb3-4e50-983a-c00b04645e7c\") " pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.822707 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04443910-6fb3-4e50-983a-c00b04645e7c-utilities\") pod \"redhat-marketplace-xmpnl\" (UID: \"04443910-6fb3-4e50-983a-c00b04645e7c\") " pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.822738 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hpm9\" (UniqueName: \"kubernetes.io/projected/04443910-6fb3-4e50-983a-c00b04645e7c-kube-api-access-7hpm9\") pod \"redhat-marketplace-xmpnl\" (UID: \"04443910-6fb3-4e50-983a-c00b04645e7c\") " pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.823138 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04443910-6fb3-4e50-983a-c00b04645e7c-catalog-content\") pod \"redhat-marketplace-xmpnl\" (UID: \"04443910-6fb3-4e50-983a-c00b04645e7c\") " pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.823293 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04443910-6fb3-4e50-983a-c00b04645e7c-utilities\") pod \"redhat-marketplace-xmpnl\" (UID: \"04443910-6fb3-4e50-983a-c00b04645e7c\") 
" pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.845500 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hpm9\" (UniqueName: \"kubernetes.io/projected/04443910-6fb3-4e50-983a-c00b04645e7c-kube-api-access-7hpm9\") pod \"redhat-marketplace-xmpnl\" (UID: \"04443910-6fb3-4e50-983a-c00b04645e7c\") " pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:12 crc kubenswrapper[4787]: I0127 08:26:12.901893 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:13 crc kubenswrapper[4787]: I0127 08:26:13.516118 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmpnl"] Jan 27 08:26:13 crc kubenswrapper[4787]: I0127 08:26:13.973286 4787 generic.go:334] "Generic (PLEG): container finished" podID="04443910-6fb3-4e50-983a-c00b04645e7c" containerID="55fa4224f52fa3a2d1615124512c35885f3b19b8973b03aca7880ea9c9c7eb24" exitCode=0 Jan 27 08:26:13 crc kubenswrapper[4787]: I0127 08:26:13.973462 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpnl" event={"ID":"04443910-6fb3-4e50-983a-c00b04645e7c","Type":"ContainerDied","Data":"55fa4224f52fa3a2d1615124512c35885f3b19b8973b03aca7880ea9c9c7eb24"} Jan 27 08:26:13 crc kubenswrapper[4787]: I0127 08:26:13.973614 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpnl" event={"ID":"04443910-6fb3-4e50-983a-c00b04645e7c","Type":"ContainerStarted","Data":"be45119f291618d3fc60777d9abdcb3f2ce93d4c6cc9c290c81c684e453691e1"} Jan 27 08:26:14 crc kubenswrapper[4787]: I0127 08:26:14.981977 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpnl" event={"ID":"04443910-6fb3-4e50-983a-c00b04645e7c","Type":"ContainerStarted","Data":"6ce980486e5cfe8ce5b46d2fc2e2c696f839f6522d325409b9d342fcb7946e8d"} Jan 27 08:26:16 crc kubenswrapper[4787]: I0127 08:26:16.007003 4787 generic.go:334] "Generic (PLEG): container finished" podID="04443910-6fb3-4e50-983a-c00b04645e7c" containerID="6ce980486e5cfe8ce5b46d2fc2e2c696f839f6522d325409b9d342fcb7946e8d" exitCode=0 Jan 27 08:26:16 crc kubenswrapper[4787]: I0127 08:26:16.007060 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpnl" event={"ID":"04443910-6fb3-4e50-983a-c00b04645e7c","Type":"ContainerDied","Data":"6ce980486e5cfe8ce5b46d2fc2e2c696f839f6522d325409b9d342fcb7946e8d"} Jan 27 08:26:17 crc kubenswrapper[4787]: I0127 08:26:17.016393 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpnl" event={"ID":"04443910-6fb3-4e50-983a-c00b04645e7c","Type":"ContainerStarted","Data":"a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6"} Jan 27 08:26:17 crc kubenswrapper[4787]: I0127 08:26:17.035698 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xmpnl" podStartSLOduration=2.582285609 podStartE2EDuration="5.035681013s" podCreationTimestamp="2026-01-27 08:26:12 +0000 UTC" firstStartedPulling="2026-01-27 08:26:13.975345159 +0000 UTC m=+2079.627700651" lastFinishedPulling="2026-01-27 08:26:16.428740563 +0000 UTC m=+2082.081096055" observedRunningTime="2026-01-27 08:26:17.033989412 +0000 UTC m=+2082.686344904" watchObservedRunningTime="2026-01-27 08:26:17.035681013 +0000 UTC 
m=+2082.688036505" Jan 27 08:26:18 crc kubenswrapper[4787]: I0127 08:26:18.569009 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:18 crc kubenswrapper[4787]: I0127 08:26:18.569388 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:18 crc kubenswrapper[4787]: I0127 08:26:18.608905 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:19 crc kubenswrapper[4787]: I0127 08:26:19.166918 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:19 crc kubenswrapper[4787]: I0127 08:26:19.760651 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcgw7"] Jan 27 08:26:21 crc kubenswrapper[4787]: I0127 08:26:21.041403 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vcgw7" podUID="a2265c67-504d-465a-b80a-c715b4202cf3" containerName="registry-server" containerID="cri-o://9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3" gracePeriod=2 Jan 27 08:26:21 crc kubenswrapper[4787]: I0127 08:26:21.528128 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:21 crc kubenswrapper[4787]: I0127 08:26:21.665676 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdd49\" (UniqueName: \"kubernetes.io/projected/a2265c67-504d-465a-b80a-c715b4202cf3-kube-api-access-bdd49\") pod \"a2265c67-504d-465a-b80a-c715b4202cf3\" (UID: \"a2265c67-504d-465a-b80a-c715b4202cf3\") " Jan 27 08:26:21 crc kubenswrapper[4787]: I0127 08:26:21.665842 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2265c67-504d-465a-b80a-c715b4202cf3-catalog-content\") pod \"a2265c67-504d-465a-b80a-c715b4202cf3\" (UID: \"a2265c67-504d-465a-b80a-c715b4202cf3\") " Jan 27 08:26:21 crc kubenswrapper[4787]: I0127 08:26:21.665882 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2265c67-504d-465a-b80a-c715b4202cf3-utilities\") pod \"a2265c67-504d-465a-b80a-c715b4202cf3\" (UID: \"a2265c67-504d-465a-b80a-c715b4202cf3\") " Jan 27 08:26:21 crc kubenswrapper[4787]: I0127 08:26:21.666658 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2265c67-504d-465a-b80a-c715b4202cf3-utilities" (OuterVolumeSpecName: "utilities") pod "a2265c67-504d-465a-b80a-c715b4202cf3" (UID: "a2265c67-504d-465a-b80a-c715b4202cf3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:26:21 crc kubenswrapper[4787]: I0127 08:26:21.671783 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2265c67-504d-465a-b80a-c715b4202cf3-kube-api-access-bdd49" (OuterVolumeSpecName: "kube-api-access-bdd49") pod "a2265c67-504d-465a-b80a-c715b4202cf3" (UID: "a2265c67-504d-465a-b80a-c715b4202cf3"). InnerVolumeSpecName "kube-api-access-bdd49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:26:21 crc kubenswrapper[4787]: I0127 08:26:21.768264 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2265c67-504d-465a-b80a-c715b4202cf3-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:26:21 crc kubenswrapper[4787]: I0127 08:26:21.768306 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdd49\" (UniqueName: \"kubernetes.io/projected/a2265c67-504d-465a-b80a-c715b4202cf3-kube-api-access-bdd49\") on node \"crc\" DevicePath \"\"" Jan 27 08:26:21 crc kubenswrapper[4787]: I0127 08:26:21.800699 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2265c67-504d-465a-b80a-c715b4202cf3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2265c67-504d-465a-b80a-c715b4202cf3" (UID: "a2265c67-504d-465a-b80a-c715b4202cf3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:26:21 crc kubenswrapper[4787]: I0127 08:26:21.869684 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2265c67-504d-465a-b80a-c715b4202cf3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.051995 4787 generic.go:334] "Generic (PLEG): container finished" podID="a2265c67-504d-465a-b80a-c715b4202cf3" containerID="9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3" exitCode=0 Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.052039 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcgw7" event={"ID":"a2265c67-504d-465a-b80a-c715b4202cf3","Type":"ContainerDied","Data":"9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3"} Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.052069 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcgw7" event={"ID":"a2265c67-504d-465a-b80a-c715b4202cf3","Type":"ContainerDied","Data":"060b3b3e5c20939a0931c5b1b0fcae788e01b084a190e6d6e512693b8e45f483"} Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.052071 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vcgw7" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.052089 4787 scope.go:117] "RemoveContainer" containerID="9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.083253 4787 scope.go:117] "RemoveContainer" containerID="779b69bc3b17304f4e79421bac24efff90f485ec9ae8f9b81420efd9123664f1" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.104321 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcgw7"] Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.111174 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vcgw7"] Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.124990 4787 scope.go:117] "RemoveContainer" containerID="1069293c5c0e9f74935618894be14b24cf62f8b9c2e8c906fa00daeb2cea9400" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.143722 4787 scope.go:117] "RemoveContainer" containerID="9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3" Jan 27 08:26:22 crc kubenswrapper[4787]: E0127 08:26:22.144151 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3\": container with ID starting with 9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3 not found: ID does not exist" containerID="9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.144200 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3"} err="failed to get container status \"9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3\": rpc error: code = NotFound desc = could not find container \"9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3\": container with ID starting with 9862f0422b82447b0dfc99e64d9a10c52598d6775ee4c921dbb857ea6d3b94f3 not found: ID does not exist" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.144233 4787 scope.go:117] "RemoveContainer" containerID="779b69bc3b17304f4e79421bac24efff90f485ec9ae8f9b81420efd9123664f1" Jan 27 08:26:22 crc kubenswrapper[4787]: E0127 08:26:22.144570 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"779b69bc3b17304f4e79421bac24efff90f485ec9ae8f9b81420efd9123664f1\": container with ID starting with 779b69bc3b17304f4e79421bac24efff90f485ec9ae8f9b81420efd9123664f1 not found: ID does not exist" containerID="779b69bc3b17304f4e79421bac24efff90f485ec9ae8f9b81420efd9123664f1" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.144626 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"779b69bc3b17304f4e79421bac24efff90f485ec9ae8f9b81420efd9123664f1"} err="failed to get container status \"779b69bc3b17304f4e79421bac24efff90f485ec9ae8f9b81420efd9123664f1\": rpc error: code = NotFound desc = could not find container \"779b69bc3b17304f4e79421bac24efff90f485ec9ae8f9b81420efd9123664f1\": container with ID starting with 779b69bc3b17304f4e79421bac24efff90f485ec9ae8f9b81420efd9123664f1 not found: ID does not exist" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.144661 4787 scope.go:117] "RemoveContainer" 
containerID="1069293c5c0e9f74935618894be14b24cf62f8b9c2e8c906fa00daeb2cea9400" Jan 27 08:26:22 crc kubenswrapper[4787]: E0127 08:26:22.145215 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1069293c5c0e9f74935618894be14b24cf62f8b9c2e8c906fa00daeb2cea9400\": container with ID starting with 1069293c5c0e9f74935618894be14b24cf62f8b9c2e8c906fa00daeb2cea9400 not found: ID does not exist" containerID="1069293c5c0e9f74935618894be14b24cf62f8b9c2e8c906fa00daeb2cea9400" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.145247 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1069293c5c0e9f74935618894be14b24cf62f8b9c2e8c906fa00daeb2cea9400"} err="failed to get container status \"1069293c5c0e9f74935618894be14b24cf62f8b9c2e8c906fa00daeb2cea9400\": rpc error: code = NotFound desc = could not find container \"1069293c5c0e9f74935618894be14b24cf62f8b9c2e8c906fa00daeb2cea9400\": container with ID starting with 1069293c5c0e9f74935618894be14b24cf62f8b9c2e8c906fa00daeb2cea9400 not found: ID does not exist" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.902033 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.902442 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:22 crc kubenswrapper[4787]: I0127 08:26:22.981919 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:23 crc kubenswrapper[4787]: I0127 08:26:23.085311 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2265c67-504d-465a-b80a-c715b4202cf3" path="/var/lib/kubelet/pods/a2265c67-504d-465a-b80a-c715b4202cf3/volumes" Jan 27 08:26:23 crc kubenswrapper[4787]: I0127 08:26:23.111587 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:25 crc kubenswrapper[4787]: I0127 08:26:25.166342 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmpnl"] Jan 27 08:26:25 crc kubenswrapper[4787]: I0127 08:26:25.166882 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xmpnl" podUID="04443910-6fb3-4e50-983a-c00b04645e7c" containerName="registry-server" containerID="cri-o://a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6" gracePeriod=2 Jan 27 08:26:25 crc kubenswrapper[4787]: I0127 08:26:25.578726 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:25 crc kubenswrapper[4787]: I0127 08:26:25.762136 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hpm9\" (UniqueName: \"kubernetes.io/projected/04443910-6fb3-4e50-983a-c00b04645e7c-kube-api-access-7hpm9\") pod \"04443910-6fb3-4e50-983a-c00b04645e7c\" (UID: \"04443910-6fb3-4e50-983a-c00b04645e7c\") " Jan 27 08:26:25 crc kubenswrapper[4787]: I0127 08:26:25.762253 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04443910-6fb3-4e50-983a-c00b04645e7c-catalog-content\") pod \"04443910-6fb3-4e50-983a-c00b04645e7c\" (UID: \"04443910-6fb3-4e50-983a-c00b04645e7c\") " Jan 27 08:26:25 crc kubenswrapper[4787]: I0127 08:26:25.762387 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04443910-6fb3-4e50-983a-c00b04645e7c-utilities\") pod \"04443910-6fb3-4e50-983a-c00b04645e7c\" (UID: \"04443910-6fb3-4e50-983a-c00b04645e7c\") " Jan 27 08:26:25 crc kubenswrapper[4787]: I0127 08:26:25.763375 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04443910-6fb3-4e50-983a-c00b04645e7c-utilities" (OuterVolumeSpecName: "utilities") pod "04443910-6fb3-4e50-983a-c00b04645e7c" (UID: "04443910-6fb3-4e50-983a-c00b04645e7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:26:25 crc kubenswrapper[4787]: I0127 08:26:25.773173 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04443910-6fb3-4e50-983a-c00b04645e7c-kube-api-access-7hpm9" (OuterVolumeSpecName: "kube-api-access-7hpm9") pod "04443910-6fb3-4e50-983a-c00b04645e7c" (UID: "04443910-6fb3-4e50-983a-c00b04645e7c"). InnerVolumeSpecName "kube-api-access-7hpm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:26:25 crc kubenswrapper[4787]: I0127 08:26:25.784300 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04443910-6fb3-4e50-983a-c00b04645e7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04443910-6fb3-4e50-983a-c00b04645e7c" (UID: "04443910-6fb3-4e50-983a-c00b04645e7c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:26:25 crc kubenswrapper[4787]: I0127 08:26:25.864062 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04443910-6fb3-4e50-983a-c00b04645e7c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:26:25 crc kubenswrapper[4787]: I0127 08:26:25.864094 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hpm9\" (UniqueName: \"kubernetes.io/projected/04443910-6fb3-4e50-983a-c00b04645e7c-kube-api-access-7hpm9\") on node \"crc\" DevicePath \"\"" Jan 27 08:26:25 crc kubenswrapper[4787]: I0127 08:26:25.864107 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04443910-6fb3-4e50-983a-c00b04645e7c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.089136 4787 generic.go:334] "Generic (PLEG): container finished" podID="04443910-6fb3-4e50-983a-c00b04645e7c" containerID="a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6" exitCode=0 Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.089205 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpnl" event={"ID":"04443910-6fb3-4e50-983a-c00b04645e7c","Type":"ContainerDied","Data":"a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6"} Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.089245 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpnl" event={"ID":"04443910-6fb3-4e50-983a-c00b04645e7c","Type":"ContainerDied","Data":"be45119f291618d3fc60777d9abdcb3f2ce93d4c6cc9c290c81c684e453691e1"} Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.089270 4787 scope.go:117] "RemoveContainer" containerID="a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6" Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.089454 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmpnl" Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.126046 4787 scope.go:117] "RemoveContainer" containerID="6ce980486e5cfe8ce5b46d2fc2e2c696f839f6522d325409b9d342fcb7946e8d" Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.128799 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmpnl"] Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.134580 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmpnl"] Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.159452 4787 scope.go:117] "RemoveContainer" containerID="55fa4224f52fa3a2d1615124512c35885f3b19b8973b03aca7880ea9c9c7eb24" Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.189909 4787 scope.go:117] "RemoveContainer" containerID="a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6" Jan 27 08:26:26 crc kubenswrapper[4787]: E0127 08:26:26.190418 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6\": container with ID starting with a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6 not found: ID does not exist" containerID="a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6" Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.190458 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6"} err="failed to get container status \"a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6\": rpc error: code = NotFound desc = could not find container \"a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6\": container with ID starting with a625f9530d4218b230ba2f19e5882c1cc1614e48a600741d4e3683a1b271aaa6 not found: ID does not exist" Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.190485 4787 scope.go:117] "RemoveContainer" containerID="6ce980486e5cfe8ce5b46d2fc2e2c696f839f6522d325409b9d342fcb7946e8d" Jan 27 08:26:26 crc kubenswrapper[4787]: E0127 08:26:26.190988 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ce980486e5cfe8ce5b46d2fc2e2c696f839f6522d325409b9d342fcb7946e8d\": container with ID starting with 6ce980486e5cfe8ce5b46d2fc2e2c696f839f6522d325409b9d342fcb7946e8d not found: ID does not exist" containerID="6ce980486e5cfe8ce5b46d2fc2e2c696f839f6522d325409b9d342fcb7946e8d" Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.191016 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce980486e5cfe8ce5b46d2fc2e2c696f839f6522d325409b9d342fcb7946e8d"} err="failed to get container status \"6ce980486e5cfe8ce5b46d2fc2e2c696f839f6522d325409b9d342fcb7946e8d\": rpc error: code = NotFound desc = could not find container \"6ce980486e5cfe8ce5b46d2fc2e2c696f839f6522d325409b9d342fcb7946e8d\": container with ID starting with 6ce980486e5cfe8ce5b46d2fc2e2c696f839f6522d325409b9d342fcb7946e8d not found: ID does not exist" Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.191038 4787 scope.go:117] "RemoveContainer" containerID="55fa4224f52fa3a2d1615124512c35885f3b19b8973b03aca7880ea9c9c7eb24" Jan 27 08:26:26 crc kubenswrapper[4787]: E0127 08:26:26.195011 4787 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"55fa4224f52fa3a2d1615124512c35885f3b19b8973b03aca7880ea9c9c7eb24\": container with ID starting with 55fa4224f52fa3a2d1615124512c35885f3b19b8973b03aca7880ea9c9c7eb24 not found: ID does not exist" containerID="55fa4224f52fa3a2d1615124512c35885f3b19b8973b03aca7880ea9c9c7eb24" Jan 27 08:26:26 crc kubenswrapper[4787]: I0127 08:26:26.195086 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55fa4224f52fa3a2d1615124512c35885f3b19b8973b03aca7880ea9c9c7eb24"} err="failed to get container status \"55fa4224f52fa3a2d1615124512c35885f3b19b8973b03aca7880ea9c9c7eb24\": rpc error: code = NotFound desc = could not find container \"55fa4224f52fa3a2d1615124512c35885f3b19b8973b03aca7880ea9c9c7eb24\": container with ID starting with 55fa4224f52fa3a2d1615124512c35885f3b19b8973b03aca7880ea9c9c7eb24 not found: ID does not exist" Jan 27 08:26:27 crc kubenswrapper[4787]: I0127 08:26:27.086850 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04443910-6fb3-4e50-983a-c00b04645e7c" path="/var/lib/kubelet/pods/04443910-6fb3-4e50-983a-c00b04645e7c/volumes" Jan 27 08:27:22 crc kubenswrapper[4787]: I0127 08:27:22.822470 4787 patch_prober.go:28] interesting pod/machine-config-daemon-q4fh5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:27:22 crc kubenswrapper[4787]: I0127 08:27:22.823118 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4fh5" podUID="f051e184-acac-47cf-9e04-9df648288715" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515136073405024451 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015136073406017367 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015136066652016517 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015136066652015467 5ustar corecore